00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 177 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3679 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.085 The recommended git tool is: git 00:00:00.085 using credential 00000000-0000-0000-0000-000000000002 00:00:00.092 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.123 Fetching changes from the remote Git repository 00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.173 Using shallow fetch with depth 1 00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.173 > git --version # timeout=10 00:00:00.214 > git --version # 'git version 2.39.2' 00:00:00.214 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.246 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.246 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.142 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.152 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.165 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.165 > git config core.sparsecheckout # timeout=10 00:00:04.174 > git read-tree -mu HEAD # timeout=10 00:00:04.190 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.211 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.211 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.302 [Pipeline] Start of Pipeline 00:00:04.315 [Pipeline] library 00:00:04.316 Loading library shm_lib@master 00:00:04.317 Library shm_lib@master is cached. Copying from home. 00:00:04.329 [Pipeline] node 00:00:04.341 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:04.342 [Pipeline] { 00:00:04.352 [Pipeline] catchError 00:00:04.354 [Pipeline] { 00:00:04.363 [Pipeline] wrap 00:00:04.371 [Pipeline] { 00:00:04.378 [Pipeline] stage 00:00:04.379 [Pipeline] { (Prologue) 00:00:04.558 [Pipeline] sh 00:00:04.841 + logger -p user.info -t JENKINS-CI 00:00:04.860 [Pipeline] echo 00:00:04.862 Node: WFP21 00:00:04.870 [Pipeline] sh 00:00:05.170 [Pipeline] setCustomBuildProperty 00:00:05.182 [Pipeline] echo 00:00:05.183 Cleanup processes 00:00:05.189 [Pipeline] sh 00:00:05.472 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.472 2739633 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.484 [Pipeline] sh 00:00:05.769 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.769 ++ grep -v 'sudo pgrep' 00:00:05.769 ++ awk '{print $1}' 00:00:05.769 + sudo kill -9 00:00:05.769 + true 00:00:05.822 [Pipeline] cleanWs 00:00:05.837 [WS-CLEANUP] Deleting project workspace... 00:00:05.837 [WS-CLEANUP] Deferred wipeout is used... 
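The prologue above ends by clearing the workspace; just before that it kills any SPDK processes a previous run may have left behind. A minimal bash sketch of that cleanup idiom, reconstructed from the xtrace (the workspace path is this job's; everything else is as logged):

#!/usr/bin/env bash
# Sketch of the stale-process cleanup traced at 00:00:05.472-00:00:05.769.
ws=/var/jenkins/workspace/nvmf-phy-autotest

# pgrep -af prints "PID CMDLINE" for every process whose command line mentions
# the workspace; grep -v drops pgrep's own entry and awk keeps only the PIDs.
pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# With no survivors the PID list is empty and kill -9 exits non-zero, which is
# why the trace shows "+ true": it keeps an idle cleanup from failing the stage.
sudo kill -9 $pids || true

In this run only pgrep's own sudo wrapper matched, so the kill was a no-op and the build continued.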
00:00:05.845 [WS-CLEANUP] done 00:00:05.851 [Pipeline] setCustomBuildProperty 00:00:05.864 [Pipeline] sh 00:00:06.144 + sudo git config --global --replace-all safe.directory '*' 00:00:06.231 [Pipeline] httpRequest 00:00:06.592 [Pipeline] echo 00:00:06.594 Sorcerer 10.211.164.20 is alive 00:00:06.605 [Pipeline] retry 00:00:06.608 [Pipeline] { 00:00:06.649 [Pipeline] httpRequest 00:00:06.654 HttpMethod: GET 00:00:06.654 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.655 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.658 Response Code: HTTP/1.1 200 OK 00:00:06.658 Success: Status code 200 is in the accepted range: 200,404 00:00:06.658 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.380 [Pipeline] } 00:00:07.395 [Pipeline] // retry 00:00:07.401 [Pipeline] sh 00:00:07.682 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.696 [Pipeline] httpRequest 00:00:08.420 [Pipeline] echo 00:00:08.422 Sorcerer 10.211.164.20 is alive 00:00:08.429 [Pipeline] retry 00:00:08.430 [Pipeline] { 00:00:08.442 [Pipeline] httpRequest 00:00:08.447 HttpMethod: GET 00:00:08.447 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:08.448 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:08.468 Response Code: HTTP/1.1 200 OK 00:00:08.468 Success: Status code 200 is in the accepted range: 200,404 00:00:08.469 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:48.335 [Pipeline] } 00:00:48.355 [Pipeline] // retry 00:00:48.364 [Pipeline] sh 00:00:48.661 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:51.218 [Pipeline] sh 00:00:51.505 + git -C spdk log --oneline -n5 00:00:51.505 b18e1bd62 version: v24.09.1-pre 00:00:51.505 19524ad45 version: v24.09 00:00:51.505 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:51.505 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:51.505 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:51.528 [Pipeline] withCredentials 00:00:51.542 > git --version # timeout=10 00:00:51.555 > git --version # 'git version 2.39.2' 00:00:51.574 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:51.576 [Pipeline] { 00:00:51.586 [Pipeline] retry 00:00:51.589 [Pipeline] { 00:00:51.604 [Pipeline] sh 00:00:51.891 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:51.904 [Pipeline] } 00:00:51.928 [Pipeline] // retry 00:00:51.934 [Pipeline] } 00:00:51.955 [Pipeline] // withCredentials 00:00:51.967 [Pipeline] httpRequest 00:00:52.447 [Pipeline] echo 00:00:52.449 Sorcerer 10.211.164.20 is alive 00:00:52.461 [Pipeline] retry 00:00:52.463 [Pipeline] { 00:00:52.480 [Pipeline] httpRequest 00:00:52.486 HttpMethod: GET 00:00:52.486 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:52.487 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:52.496 Response Code: HTTP/1.1 200 OK 00:00:52.497 Success: Status code 200 is in the accepted range: 200,404 00:00:52.497 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.394 [Pipeline] } 
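Each source tree is served from a LAN package cache ("Sorcerer", 10.211.164.20) instead of a fresh clone: the pipeline GETs a tarball named after the pinned commit, retries on failure, and unpacks it with --no-same-owner so the extracted tree belongs to the Jenkins user rather than the UIDs stored in the archive. A rough shell analogue of the Groovy httpRequest/retry steps (curl stands in for httpRequest here; the URL and commit are the ones from this run):

# Sketch only: the pipeline uses Jenkins httpRequest inside retry {}, not curl.
sha=db4637e8b949f278f369ec13f70585206ccd9507
tarball="jbp_${sha}.tar.gz"

for attempt in 1 2 3; do
    curl -fsS -o "$tarball" "http://10.211.164.20/packages/$tarball" && break
    sleep 5   # back off briefly, as the retry wrapper would before the next try
done

# --no-same-owner: extract as the invoking user, ignoring archived ownership
tar --no-same-owner -xf "$tarball"

The same fetch-and-extract pattern runs three times: jbp and spdk above, and the dpdk tarball just downloaded, which is unpacked immediately below.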
00:01:25.411 [Pipeline] // retry 00:01:25.419 [Pipeline] sh 00:01:25.704 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.097 [Pipeline] sh 00:01:27.382 + git -C dpdk log --oneline -n5 00:01:27.382 caf0f5d395 version: 22.11.4 00:01:27.382 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:27.382 dc9c799c7d vhost: fix missing spinlock unlock 00:01:27.382 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:27.382 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:27.391 [Pipeline] } 00:01:27.402 [Pipeline] // stage 00:01:27.409 [Pipeline] stage 00:01:27.411 [Pipeline] { (Prepare) 00:01:27.425 [Pipeline] writeFile 00:01:27.435 [Pipeline] sh 00:01:27.715 + logger -p user.info -t JENKINS-CI 00:01:27.728 [Pipeline] sh 00:01:28.013 + logger -p user.info -t JENKINS-CI 00:01:28.025 [Pipeline] sh 00:01:28.310 + cat autorun-spdk.conf 00:01:28.310 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.310 SPDK_TEST_NVMF=1 00:01:28.310 SPDK_TEST_NVME_CLI=1 00:01:28.310 SPDK_TEST_NVMF_NICS=mlx5 00:01:28.310 SPDK_RUN_UBSAN=1 00:01:28.310 NET_TYPE=phy 00:01:28.310 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:28.310 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:28.318 RUN_NIGHTLY=1 00:01:28.323 [Pipeline] readFile 00:01:28.350 [Pipeline] withEnv 00:01:28.353 [Pipeline] { 00:01:28.366 [Pipeline] sh 00:01:28.650 + set -ex 00:01:28.650 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:28.650 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:28.650 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.650 ++ SPDK_TEST_NVMF=1 00:01:28.650 ++ SPDK_TEST_NVME_CLI=1 00:01:28.650 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:28.650 ++ SPDK_RUN_UBSAN=1 00:01:28.650 ++ NET_TYPE=phy 00:01:28.650 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:28.650 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:28.650 ++ RUN_NIGHTLY=1 00:01:28.650 + case $SPDK_TEST_NVMF_NICS in 00:01:28.650 + DRIVERS=mlx5_ib 00:01:28.650 + [[ -n mlx5_ib ]] 00:01:28.650 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:28.650 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:35.221 rmmod: ERROR: Module irdma is not currently loaded 00:01:35.221 rmmod: ERROR: Module i40iw is not currently loaded 00:01:35.221 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:35.221 + true 00:01:35.221 + for D in $DRIVERS 00:01:35.221 + sudo modprobe mlx5_ib 00:01:35.221 + exit 0 00:01:35.230 [Pipeline] } 00:01:35.245 [Pipeline] // withEnv 00:01:35.251 [Pipeline] } 00:01:35.265 [Pipeline] // stage 00:01:35.274 [Pipeline] catchError 00:01:35.275 [Pipeline] { 00:01:35.288 [Pipeline] timeout 00:01:35.288 Timeout set to expire in 1 hr 0 min 00:01:35.291 [Pipeline] { 00:01:35.307 [Pipeline] stage 00:01:35.309 [Pipeline] { (Tests) 00:01:35.324 [Pipeline] sh 00:01:35.609 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:35.609 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:35.609 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:35.609 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:35.609 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:35.609 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:35.609 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:35.609 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:35.609 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:35.609 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:35.609 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:35.609 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:35.609 + source /etc/os-release 00:01:35.609 ++ NAME='Fedora Linux' 00:01:35.609 ++ VERSION='39 (Cloud Edition)' 00:01:35.609 ++ ID=fedora 00:01:35.609 ++ VERSION_ID=39 00:01:35.609 ++ VERSION_CODENAME= 00:01:35.609 ++ PLATFORM_ID=platform:f39 00:01:35.609 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:35.609 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:35.609 ++ LOGO=fedora-logo-icon 00:01:35.609 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:35.609 ++ HOME_URL=https://fedoraproject.org/ 00:01:35.609 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:35.609 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:35.609 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:35.609 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:35.609 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:35.609 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:35.609 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:35.609 ++ SUPPORT_END=2024-11-12 00:01:35.609 ++ VARIANT='Cloud Edition' 00:01:35.609 ++ VARIANT_ID=cloud 00:01:35.609 + uname -a 00:01:35.609 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:35.609 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:38.904 Hugepages 00:01:38.904 node hugesize free / total 00:01:38.904 node0 1048576kB 0 / 0 00:01:38.904 node0 2048kB 0 / 0 00:01:38.904 node1 1048576kB 0 / 0 00:01:38.904 node1 2048kB 0 / 0 00:01:38.904 00:01:38.904 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.904 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:38.904 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:38.904 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:38.904 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:38.904 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:38.904 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:38.904 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:38.904 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:38.904 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:38.904 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:38.904 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:38.904 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:38.904 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:38.904 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:38.904 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:38.904 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:38.904 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:38.904 + rm -f /tmp/spdk-ld-path 00:01:38.904 + source autorun-spdk.conf 00:01:38.904 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.904 ++ SPDK_TEST_NVMF=1 00:01:38.904 ++ SPDK_TEST_NVME_CLI=1 00:01:38.904 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:38.904 ++ SPDK_RUN_UBSAN=1 00:01:38.904 ++ NET_TYPE=phy 00:01:38.904 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:38.904 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:38.904 ++ RUN_NIGHTLY=1 00:01:38.904 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.904 + [[ -n '' ]] 00:01:38.904 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:38.904 + for M in /var/spdk/build-*-manifest.txt 
00:01:38.904 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:38.904 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:38.904 + for M in /var/spdk/build-*-manifest.txt 00:01:38.904 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.904 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:38.904 + for M in /var/spdk/build-*-manifest.txt 00:01:38.904 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.904 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:38.904 ++ uname 00:01:38.904 + [[ Linux == \L\i\n\u\x ]] 00:01:38.904 + sudo dmesg -T 00:01:38.904 + sudo dmesg --clear 00:01:38.904 + dmesg_pid=2741161 00:01:38.904 + [[ Fedora Linux == FreeBSD ]] 00:01:38.904 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.904 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.904 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.904 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.904 + export FIO_BIN=/usr/src/fio-static/fio 00:01:38.904 + FIO_BIN=/usr/src/fio-static/fio 00:01:38.904 + sudo dmesg -Tw 00:01:38.904 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.904 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:38.904 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.904 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.904 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.904 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.904 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.904 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.904 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:38.904 Test configuration: 00:01:38.904 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.904 SPDK_TEST_NVMF=1 00:01:38.904 SPDK_TEST_NVME_CLI=1 00:01:38.904 SPDK_TEST_NVMF_NICS=mlx5 00:01:38.904 SPDK_RUN_UBSAN=1 00:01:38.904 NET_TYPE=phy 00:01:38.904 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:38.905 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:38.905 RUN_NIGHTLY=1 21:31:11 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:38.905 21:31:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:38.905 21:31:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:38.905 21:31:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.905 21:31:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.905 21:31:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.905 21:31:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.905 21:31:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.905 21:31:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.905 21:31:11 -- paths/export.sh@5 -- $ export PATH 00:01:38.905 21:31:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.905 21:31:11 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:38.905 21:31:11 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:38.905 21:31:11 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732912271.XXXXXX 00:01:38.905 21:31:11 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732912271.0KxR9N 00:01:38.905 21:31:11 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:38.905 21:31:11 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:01:38.905 21:31:11 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:38.905 21:31:11 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:38.905 21:31:11 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:38.905 21:31:11 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:38.905 21:31:11 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:38.905 21:31:11 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:38.905 21:31:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.905 21:31:11 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:38.905 21:31:11 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:38.905 21:31:11 -- pm/common@17 -- $ local monitor 00:01:38.905 21:31:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.905 21:31:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.905 21:31:11 -- pm/common@21 -- $ date +%s 00:01:38.905 21:31:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:01:38.905 21:31:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.905 21:31:11 -- pm/common@21 -- $ date +%s 00:01:38.905 21:31:11 -- pm/common@25 -- $ sleep 1 00:01:38.905 21:31:11 -- pm/common@21 -- $ date +%s 00:01:38.905 21:31:11 -- pm/common@21 -- $ date +%s 00:01:38.905 21:31:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732912271 00:01:38.905 21:31:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732912271 00:01:38.905 21:31:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732912271 00:01:38.905 21:31:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732912271 00:01:39.164 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732912271_collect-vmstat.pm.log 00:01:39.164 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732912271_collect-cpu-load.pm.log 00:01:39.165 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732912271_collect-cpu-temp.pm.log 00:01:39.165 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732912271_collect-bmc-pm.bmc.pm.log 00:01:40.104 21:31:12 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:01:40.104 21:31:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:40.104 21:31:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:40.104 21:31:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:40.104 21:31:12 -- spdk/autobuild.sh@16 -- $ date -u 00:01:40.104 Fri Nov 29 08:31:12 PM UTC 2024 00:01:40.104 21:31:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:40.104 v24.09-1-gb18e1bd62 00:01:40.104 21:31:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:40.104 21:31:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:40.105 21:31:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:40.105 21:31:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:40.105 21:31:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:40.105 21:31:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.105 ************************************ 00:01:40.105 START TEST ubsan 00:01:40.105 ************************************ 00:01:40.105 21:31:12 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:40.105 using ubsan 00:01:40.105 00:01:40.105 real 0m0.001s 00:01:40.105 user 0m0.000s 00:01:40.105 sys 0m0.000s 00:01:40.105 21:31:12 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:40.105 21:31:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.105 ************************************ 00:01:40.105 END TEST ubsan 00:01:40.105 ************************************ 00:01:40.105 21:31:12 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:40.105 21:31:12 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:40.105 21:31:12 -- 
common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:40.105 21:31:12 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:40.105 21:31:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:40.105 21:31:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.105 ************************************ 00:01:40.105 START TEST build_native_dpdk 00:01:40.105 ************************************ 00:01:40.105 21:31:12 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:40.105 caf0f5d395 version: 22.11.4 00:01:40.105 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:40.105 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.105 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.105 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.105 21:31:12 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:40.105 patching file config/rte_config.h 00:01:40.105 Hunk #1 succeeded at 60 (offset 1 line). 00:01:40.105 21:31:12 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:40.105 21:31:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:40.365 21:31:12 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:40.365 patching file lib/pcapng/rte_pcapng.c 00:01:40.365 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:40.365 21:31:12 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:40.365 21:31:12 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:40.365 21:31:12 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:40.365 21:31:12 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:40.365 21:31:12 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:40.366 21:31:12 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:40.366 21:31:12 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:45.641 The Meson build system 00:01:45.641 Version: 1.5.0 00:01:45.641 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:45.641 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:45.641 Build type: native build 00:01:45.641 Program cat found: YES (/usr/bin/cat) 00:01:45.641 Project name: DPDK 00:01:45.641 Project version: 22.11.4 00:01:45.641 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:45.641 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:45.641 Host machine cpu family: x86_64 00:01:45.641 Host machine cpu: x86_64 00:01:45.641 Message: ## Building in Developer Mode ## 00:01:45.641 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.641 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:45.641 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.641 Program objdump found: YES (/usr/bin/objdump) 00:01:45.641 Program python3 found: YES (/usr/bin/python3) 00:01:45.641 Program cat found: YES (/usr/bin/cat) 00:01:45.641 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:45.641 Checking for size of "void *" : 8 00:01:45.641 Checking for size of "void *" : 8 (cached) 00:01:45.641 Library m found: YES 00:01:45.641 Library numa found: YES 00:01:45.641 Has header "numaif.h" : YES 00:01:45.641 Library fdt found: NO 00:01:45.641 Library execinfo found: NO 00:01:45.641 Has header "execinfo.h" : YES 00:01:45.641 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:45.641 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.641 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.641 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.641 Run-time dependency openssl found: YES 3.1.1 00:01:45.641 Run-time dependency libpcap found: YES 1.10.4 00:01:45.641 Has header "pcap.h" with dependency libpcap: YES 00:01:45.641 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.641 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.641 Compiler for C supports arguments -Wformat: YES 00:01:45.641 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.641 Compiler for C supports arguments -Wformat-security: NO 00:01:45.641 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.641 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.641 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.641 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.641 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.641 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.641 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.641 Compiler for C supports arguments -Wundef: YES 00:01:45.641 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.641 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.641 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:45.641 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.641 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.641 Compiler for C supports arguments -mavx512f: YES 00:01:45.641 Checking if "AVX512 checking" compiles: YES 00:01:45.641 Fetching value of define "__SSE4_2__" : 1 00:01:45.641 Fetching value of define "__AES__" : 1 00:01:45.641 Fetching value of define "__AVX__" : 1 00:01:45.641 Fetching value of define "__AVX2__" : 1 00:01:45.641 Fetching value of define "__AVX512BW__" : 1 00:01:45.641 Fetching value of define "__AVX512CD__" : 1 00:01:45.641 Fetching value of define "__AVX512DQ__" : 1 00:01:45.641 Fetching value of define "__AVX512F__" : 1 00:01:45.641 Fetching value of define "__AVX512VL__" : 1 00:01:45.641 Fetching value of define "__PCLMUL__" : 1 00:01:45.641 Fetching value of define "__RDRND__" : 1 00:01:45.641 Fetching value of define "__RDSEED__" : 1 00:01:45.641 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:45.641 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.641 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.641 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.641 Checking for function "getentropy" : YES 00:01:45.641 Message: lib/eal: Defining dependency "eal" 00:01:45.641 Message: lib/ring: Defining dependency "ring" 00:01:45.641 Message: lib/rcu: Defining dependency "rcu" 00:01:45.641 Message: lib/mempool: Defining dependency "mempool" 00:01:45.641 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.641 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.641 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.641 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:45.641 Compiler for C supports arguments -mpclmul: YES 00:01:45.641 Compiler for C supports arguments -maes: YES 00:01:45.641 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.641 Compiler for C supports arguments -mavx512bw: YES 00:01:45.641 Compiler for C supports arguments -mavx512dq: YES 00:01:45.641 Compiler for C supports arguments -mavx512vl: YES 00:01:45.641 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.641 Compiler for C supports arguments -mavx2: YES 00:01:45.641 Compiler for C supports arguments -mavx: YES 00:01:45.641 Message: lib/net: Defining dependency "net" 00:01:45.641 Message: lib/meter: Defining dependency "meter" 00:01:45.641 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.641 Message: lib/pci: Defining dependency "pci" 00:01:45.641 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.641 Message: lib/metrics: Defining dependency "metrics" 00:01:45.641 Message: lib/hash: Defining dependency "hash" 00:01:45.641 Message: lib/timer: Defining dependency "timer" 00:01:45.641 Fetching value of define "__AVX2__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.641 Message: lib/acl: Defining dependency "acl" 00:01:45.641 Message: lib/bbdev: Defining dependency "bbdev" 00:01:45.641 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:45.641 Run-time dependency libelf found: YES 0.191 00:01:45.641 Message: lib/bpf: Defining dependency "bpf" 00:01:45.641 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:45.641 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.641 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.641 Message: lib/distributor: Defining dependency "distributor" 00:01:45.641 Message: lib/efd: Defining dependency "efd" 00:01:45.641 Message: lib/eventdev: Defining dependency "eventdev" 00:01:45.641 Message: lib/gpudev: Defining dependency "gpudev" 00:01:45.641 Message: lib/gro: Defining dependency "gro" 00:01:45.641 Message: lib/gso: Defining dependency "gso" 00:01:45.641 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:45.641 Message: lib/jobstats: Defining dependency "jobstats" 00:01:45.641 Message: lib/latencystats: Defining dependency "latencystats" 00:01:45.641 Message: lib/lpm: Defining dependency "lpm" 00:01:45.641 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:45.641 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:45.641 Message: lib/member: Defining dependency "member" 00:01:45.641 Message: lib/pcapng: Defining dependency "pcapng" 00:01:45.641 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.641 Message: lib/power: Defining dependency "power" 00:01:45.641 Message: lib/rawdev: Defining dependency "rawdev" 00:01:45.641 Message: lib/regexdev: Defining dependency "regexdev" 00:01:45.641 Message: lib/dmadev: 
Defining dependency "dmadev" 00:01:45.641 Message: lib/rib: Defining dependency "rib" 00:01:45.641 Message: lib/reorder: Defining dependency "reorder" 00:01:45.641 Message: lib/sched: Defining dependency "sched" 00:01:45.641 Message: lib/security: Defining dependency "security" 00:01:45.641 Message: lib/stack: Defining dependency "stack" 00:01:45.641 Has header "linux/userfaultfd.h" : YES 00:01:45.641 Message: lib/vhost: Defining dependency "vhost" 00:01:45.641 Message: lib/ipsec: Defining dependency "ipsec" 00:01:45.641 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.641 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.641 Message: lib/fib: Defining dependency "fib" 00:01:45.641 Message: lib/port: Defining dependency "port" 00:01:45.641 Message: lib/pdump: Defining dependency "pdump" 00:01:45.641 Message: lib/table: Defining dependency "table" 00:01:45.641 Message: lib/pipeline: Defining dependency "pipeline" 00:01:45.641 Message: lib/graph: Defining dependency "graph" 00:01:45.641 Message: lib/node: Defining dependency "node" 00:01:45.641 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.641 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.642 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.642 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.642 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:45.642 Compiler for C supports arguments -Wno-unused-value: YES 00:01:45.642 Compiler for C supports arguments -Wno-format: YES 00:01:45.642 Compiler for C supports arguments -Wno-format-security: YES 00:01:45.642 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:45.903 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:45.903 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:45.903 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:45.903 Fetching value of define "__AVX2__" : 1 (cached) 00:01:45.903 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.903 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.903 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.903 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:45.903 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:45.903 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:45.903 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:45.903 Configuring doxy-api.conf using configuration 00:01:45.903 Program sphinx-build found: NO 00:01:45.903 Configuring rte_build_config.h using configuration 00:01:45.903 Message: 00:01:45.903 ================= 00:01:45.903 Applications Enabled 00:01:45.903 ================= 00:01:45.903 00:01:45.903 apps: 00:01:45.903 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:45.903 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:45.903 test-security-perf, 00:01:45.903 00:01:45.903 Message: 00:01:45.903 ================= 00:01:45.903 Libraries Enabled 00:01:45.903 ================= 00:01:45.903 00:01:45.903 libs: 00:01:45.903 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:45.903 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:45.903 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:45.903 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:45.903 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:45.903 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:45.903 table, pipeline, graph, node, 00:01:45.903 00:01:45.903 Message: 00:01:45.903 =============== 00:01:45.903 Drivers Enabled 00:01:45.903 =============== 00:01:45.903 00:01:45.903 common: 00:01:45.903 00:01:45.903 bus: 00:01:45.903 pci, vdev, 00:01:45.903 mempool: 00:01:45.903 ring, 00:01:45.903 dma: 00:01:45.903 00:01:45.903 net: 00:01:45.903 i40e, 00:01:45.903 raw: 00:01:45.903 00:01:45.903 crypto: 00:01:45.903 00:01:45.903 compress: 00:01:45.903 00:01:45.903 regex: 00:01:45.903 00:01:45.903 vdpa: 00:01:45.903 00:01:45.903 event: 00:01:45.903 00:01:45.903 baseband: 00:01:45.903 00:01:45.903 gpu: 00:01:45.903 00:01:45.903 00:01:45.903 Message: 00:01:45.903 ================= 00:01:45.903 Content Skipped 00:01:45.903 ================= 00:01:45.903 00:01:45.903 apps: 00:01:45.903 00:01:45.903 libs: 00:01:45.903 kni: explicitly disabled via build config (deprecated lib) 00:01:45.903 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:45.903 00:01:45.903 drivers: 00:01:45.903 common/cpt: not in enabled drivers build config 00:01:45.903 common/dpaax: not in enabled drivers build config 00:01:45.903 common/iavf: not in enabled drivers build config 00:01:45.903 common/idpf: not in enabled drivers build config 00:01:45.903 common/mvep: not in enabled drivers build config 00:01:45.903 common/octeontx: not in enabled drivers build config 00:01:45.903 bus/auxiliary: not in enabled drivers build config 00:01:45.903 bus/dpaa: not in enabled drivers build config 00:01:45.903 bus/fslmc: not in enabled drivers build config 00:01:45.903 bus/ifpga: not in enabled drivers build config 00:01:45.903 bus/vmbus: not in enabled drivers build config 00:01:45.903 common/cnxk: not in enabled drivers build config 00:01:45.903 common/mlx5: not in enabled drivers build config 00:01:45.903 common/qat: not in enabled drivers build config 00:01:45.903 common/sfc_efx: not in enabled drivers build config 00:01:45.903 mempool/bucket: not in enabled drivers build config 00:01:45.903 mempool/cnxk: not in enabled drivers build config 00:01:45.903 mempool/dpaa: not in enabled drivers build config 00:01:45.903 mempool/dpaa2: not in enabled drivers build config 00:01:45.903 mempool/octeontx: not in enabled drivers build config 00:01:45.903 mempool/stack: not in enabled drivers build config 00:01:45.903 dma/cnxk: not in enabled drivers build config 00:01:45.903 dma/dpaa: not in enabled drivers build config 00:01:45.903 dma/dpaa2: not in enabled drivers build config 00:01:45.903 dma/hisilicon: not in enabled drivers build config 00:01:45.903 dma/idxd: not in enabled drivers build config 00:01:45.903 dma/ioat: not in enabled drivers build config 00:01:45.903 dma/skeleton: not in enabled drivers build config 00:01:45.903 net/af_packet: not in enabled drivers build config 00:01:45.903 net/af_xdp: not in enabled drivers build config 00:01:45.903 net/ark: not in enabled drivers build config 00:01:45.903 net/atlantic: not in enabled drivers build config 00:01:45.903 net/avp: not in enabled drivers build config 00:01:45.903 net/axgbe: not in enabled drivers build config 00:01:45.903 net/bnx2x: not in enabled drivers build config 00:01:45.903 net/bnxt: not in enabled drivers build config 00:01:45.903 net/bonding: not in enabled drivers build config 00:01:45.903 net/cnxk: not in enabled drivers build config 
00:01:45.903 net/cxgbe: not in enabled drivers build config 00:01:45.903 net/dpaa: not in enabled drivers build config 00:01:45.903 net/dpaa2: not in enabled drivers build config 00:01:45.903 net/e1000: not in enabled drivers build config 00:01:45.903 net/ena: not in enabled drivers build config 00:01:45.903 net/enetc: not in enabled drivers build config 00:01:45.903 net/enetfec: not in enabled drivers build config 00:01:45.903 net/enic: not in enabled drivers build config 00:01:45.903 net/failsafe: not in enabled drivers build config 00:01:45.903 net/fm10k: not in enabled drivers build config 00:01:45.903 net/gve: not in enabled drivers build config 00:01:45.903 net/hinic: not in enabled drivers build config 00:01:45.903 net/hns3: not in enabled drivers build config 00:01:45.903 net/iavf: not in enabled drivers build config 00:01:45.903 net/ice: not in enabled drivers build config 00:01:45.903 net/idpf: not in enabled drivers build config 00:01:45.903 net/igc: not in enabled drivers build config 00:01:45.903 net/ionic: not in enabled drivers build config 00:01:45.903 net/ipn3ke: not in enabled drivers build config 00:01:45.903 net/ixgbe: not in enabled drivers build config 00:01:45.903 net/kni: not in enabled drivers build config 00:01:45.903 net/liquidio: not in enabled drivers build config 00:01:45.903 net/mana: not in enabled drivers build config 00:01:45.903 net/memif: not in enabled drivers build config 00:01:45.903 net/mlx4: not in enabled drivers build config 00:01:45.903 net/mlx5: not in enabled drivers build config 00:01:45.903 net/mvneta: not in enabled drivers build config 00:01:45.903 net/mvpp2: not in enabled drivers build config 00:01:45.903 net/netvsc: not in enabled drivers build config 00:01:45.903 net/nfb: not in enabled drivers build config 00:01:45.903 net/nfp: not in enabled drivers build config 00:01:45.903 net/ngbe: not in enabled drivers build config 00:01:45.903 net/null: not in enabled drivers build config 00:01:45.903 net/octeontx: not in enabled drivers build config 00:01:45.903 net/octeon_ep: not in enabled drivers build config 00:01:45.903 net/pcap: not in enabled drivers build config 00:01:45.903 net/pfe: not in enabled drivers build config 00:01:45.903 net/qede: not in enabled drivers build config 00:01:45.903 net/ring: not in enabled drivers build config 00:01:45.904 net/sfc: not in enabled drivers build config 00:01:45.904 net/softnic: not in enabled drivers build config 00:01:45.904 net/tap: not in enabled drivers build config 00:01:45.904 net/thunderx: not in enabled drivers build config 00:01:45.904 net/txgbe: not in enabled drivers build config 00:01:45.904 net/vdev_netvsc: not in enabled drivers build config 00:01:45.904 net/vhost: not in enabled drivers build config 00:01:45.904 net/virtio: not in enabled drivers build config 00:01:45.904 net/vmxnet3: not in enabled drivers build config 00:01:45.904 raw/cnxk_bphy: not in enabled drivers build config 00:01:45.904 raw/cnxk_gpio: not in enabled drivers build config 00:01:45.904 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:45.904 raw/ifpga: not in enabled drivers build config 00:01:45.904 raw/ntb: not in enabled drivers build config 00:01:45.904 raw/skeleton: not in enabled drivers build config 00:01:45.904 crypto/armv8: not in enabled drivers build config 00:01:45.904 crypto/bcmfs: not in enabled drivers build config 00:01:45.904 crypto/caam_jr: not in enabled drivers build config 00:01:45.904 crypto/ccp: not in enabled drivers build config 00:01:45.904 crypto/cnxk: not in enabled drivers 
build config 00:01:45.904 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.904 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.904 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.904 crypto/mlx5: not in enabled drivers build config 00:01:45.904 crypto/mvsam: not in enabled drivers build config 00:01:45.904 crypto/nitrox: not in enabled drivers build config 00:01:45.904 crypto/null: not in enabled drivers build config 00:01:45.904 crypto/octeontx: not in enabled drivers build config 00:01:45.904 crypto/openssl: not in enabled drivers build config 00:01:45.904 crypto/scheduler: not in enabled drivers build config 00:01:45.904 crypto/uadk: not in enabled drivers build config 00:01:45.904 crypto/virtio: not in enabled drivers build config 00:01:45.904 compress/isal: not in enabled drivers build config 00:01:45.904 compress/mlx5: not in enabled drivers build config 00:01:45.904 compress/octeontx: not in enabled drivers build config 00:01:45.904 compress/zlib: not in enabled drivers build config 00:01:45.904 regex/mlx5: not in enabled drivers build config 00:01:45.904 regex/cn9k: not in enabled drivers build config 00:01:45.904 vdpa/ifc: not in enabled drivers build config 00:01:45.904 vdpa/mlx5: not in enabled drivers build config 00:01:45.904 vdpa/sfc: not in enabled drivers build config 00:01:45.904 event/cnxk: not in enabled drivers build config 00:01:45.904 event/dlb2: not in enabled drivers build config 00:01:45.904 event/dpaa: not in enabled drivers build config 00:01:45.904 event/dpaa2: not in enabled drivers build config 00:01:45.904 event/dsw: not in enabled drivers build config 00:01:45.904 event/opdl: not in enabled drivers build config 00:01:45.904 event/skeleton: not in enabled drivers build config 00:01:45.904 event/sw: not in enabled drivers build config 00:01:45.904 event/octeontx: not in enabled drivers build config 00:01:45.904 baseband/acc: not in enabled drivers build config 00:01:45.904 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:45.904 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:45.904 baseband/la12xx: not in enabled drivers build config 00:01:45.904 baseband/null: not in enabled drivers build config 00:01:45.904 baseband/turbo_sw: not in enabled drivers build config 00:01:45.904 gpu/cuda: not in enabled drivers build config 00:01:45.904 00:01:45.904 00:01:45.904 Build targets in project: 311 00:01:45.904 00:01:45.904 DPDK 22.11.4 00:01:45.904 00:01:45.904 User defined options 00:01:45.904 libdir : lib 00:01:45.904 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:45.904 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:45.904 c_link_args : 00:01:45.904 enable_docs : false 00:01:45.904 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:45.904 enable_kmods : false 00:01:45.904 machine : native 00:01:45.904 tests : false 00:01:45.904 00:01:45.904 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.904 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
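With configuration done, the compile step that follows is a plain ninja invocation over the build-tmp directory meson just generated. Condensed from the traces at 00:01:40.366 and 00:01:46.169, a standalone reproduction would look like the sketch below; note the two deprecation warnings meson printed above, so the modern spellings `meson setup` and -Dcpu_instruction_set= are substituted here for the bare `meson` and -Dmachine=native from the log:

cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk

# Trimmed DPDK for SPDK: docs/kmods/tests off, only the bus, mempool/ring and
# net/i40e drivers enabled, installed into ./build (SPDK_RUN_EXTERNAL_DPDK).
meson setup build-tmp --prefix="$PWD/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base

# -j112 matches this runner's thread count; ninja defaults to nproc if omitted.
ninja -C build-tmp -j112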
00:01:46.169 21:31:18 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:01:46.169 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:46.169 [1/740] Generating lib/rte_kvargs_def with a custom command 00:01:46.169 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:46.169 [3/740] Generating lib/rte_telemetry_def with a custom command 00:01:46.169 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:46.169 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.169 [6/740] Generating lib/rte_eal_def with a custom command 00:01:46.436 [7/740] Generating lib/rte_ring_def with a custom command 00:01:46.436 [8/740] Generating lib/rte_rcu_def with a custom command 00:01:46.436 [9/740] Generating lib/rte_eal_mingw with a custom command 00:01:46.436 [10/740] Generating lib/rte_mempool_def with a custom command 00:01:46.436 [11/740] Generating lib/rte_ring_mingw with a custom command 00:01:46.436 [12/740] Generating lib/rte_net_mingw with a custom command 00:01:46.436 [13/740] Generating lib/rte_meter_mingw with a custom command 00:01:46.436 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.436 [15/740] Generating lib/rte_rcu_mingw with a custom command 00:01:46.436 [16/740] Generating lib/rte_mempool_mingw with a custom command 00:01:46.436 [17/740] Generating lib/rte_mbuf_def with a custom command 00:01:46.436 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.436 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.436 [20/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:46.436 [21/740] Generating lib/rte_net_def with a custom command 00:01:46.436 [22/740] Generating lib/rte_meter_def with a custom command 00:01:46.436 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:46.436 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.436 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:46.436 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.436 [27/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:46.436 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:46.436 [29/740] Generating lib/rte_pci_mingw with a custom command 00:01:46.436 [30/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:46.436 [31/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.436 [32/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.436 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:46.436 [34/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.436 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:46.436 [36/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:46.436 [37/740] Generating lib/rte_ethdev_def with a custom command 00:01:46.436 [38/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.436 [39/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:46.436 [40/740] Generating lib/rte_pci_def with a custom command 00:01:46.436 [41/740] Linking static target lib/librte_kvargs.a 00:01:46.436 [42/740] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.436 [43/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.436 [44/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:46.436 [45/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:46.436 [46/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:46.436 [47/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.436 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:46.436 [49/740] Generating lib/rte_cmdline_def with a custom command 00:01:46.436 [50/740] Generating lib/rte_metrics_def with a custom command 00:01:46.436 [51/740] Generating lib/rte_metrics_mingw with a custom command 00:01:46.436 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:46.436 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.436 [54/740] Generating lib/rte_hash_def with a custom command 00:01:46.436 [55/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:46.436 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:46.436 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.436 [58/740] Generating lib/rte_hash_mingw with a custom command 00:01:46.436 [59/740] Generating lib/rte_timer_def with a custom command 00:01:46.436 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.436 [61/740] Generating lib/rte_timer_mingw with a custom command 00:01:46.436 [62/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:46.436 [63/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:46.436 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:46.436 [65/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:46.436 [66/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.436 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.436 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:46.436 [69/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:46.436 [70/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:46.436 [71/740] Generating lib/rte_acl_def with a custom command 00:01:46.436 [72/740] Generating lib/rte_acl_mingw with a custom command 00:01:46.436 [73/740] Generating lib/rte_bbdev_def with a custom command 00:01:46.436 [74/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:46.436 [75/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.436 [76/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:46.436 [77/740] Generating lib/rte_bitratestats_def with a custom command 00:01:46.436 [78/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:46.436 [79/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:46.436 [80/740] Linking static target lib/librte_meter.a 00:01:46.695 [81/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.695 [82/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:46.695 [83/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.695 [84/740] Compiling C 
object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.695 [85/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:46.695 [86/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.695 [87/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:46.695 [88/740] Generating lib/rte_bpf_mingw with a custom command 00:01:46.695 [89/740] Generating lib/rte_cfgfile_def with a custom command 00:01:46.695 [90/740] Generating lib/rte_bpf_def with a custom command 00:01:46.695 [91/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.695 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:46.695 [93/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:46.695 [94/740] Linking static target lib/librte_pci.a 00:01:46.695 [95/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:46.695 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:46.695 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:46.695 [98/740] Generating lib/rte_compressdev_def with a custom command 00:01:46.695 [99/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:46.695 [100/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.695 [101/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:46.695 [102/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:46.695 [103/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:46.695 [104/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:46.695 [105/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:46.695 [106/740] Generating lib/rte_cryptodev_def with a custom command 00:01:46.695 [107/740] Generating lib/rte_distributor_def with a custom command 00:01:46.695 [108/740] Generating lib/rte_distributor_mingw with a custom command 00:01:46.695 [109/740] Generating lib/rte_efd_def with a custom command 00:01:46.695 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:46.695 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.695 [112/740] Generating lib/rte_efd_mingw with a custom command 00:01:46.695 [113/740] Linking static target lib/librte_ring.a 00:01:46.695 [114/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:46.695 [115/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:46.695 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.695 [117/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.695 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.695 [119/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:46.695 [120/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:46.695 [121/740] Generating lib/rte_eventdev_def with a custom command 00:01:46.695 [122/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:46.695 [123/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:46.695 [124/740] Generating lib/rte_gpudev_def with a custom command 00:01:46.696 [125/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.696 [126/740] Generating 
lib/rte_gpudev_mingw with a custom command 00:01:46.696 [127/740] Generating lib/rte_gro_def with a custom command 00:01:46.696 [128/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:46.696 [129/740] Generating lib/rte_gro_mingw with a custom command 00:01:46.696 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.696 [131/740] Generating lib/rte_gso_def with a custom command 00:01:46.696 [132/740] Generating lib/rte_gso_mingw with a custom command 00:01:46.957 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:46.957 [134/740] Generating lib/rte_ip_frag_def with a custom command 00:01:46.957 [135/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.957 [136/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:46.957 [137/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.957 [138/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.957 [139/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:46.957 [140/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.957 [141/740] Linking target lib/librte_kvargs.so.23.0 00:01:46.957 [142/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:46.957 [143/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:46.957 [144/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.957 [145/740] Generating lib/rte_jobstats_def with a custom command 00:01:46.957 [146/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.957 [147/740] Generating lib/rte_latencystats_def with a custom command 00:01:46.957 [148/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:46.957 [149/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:46.957 [150/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.957 [151/740] Linking static target lib/librte_cfgfile.a 00:01:46.957 [152/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:46.957 [153/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:46.957 [154/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:46.957 [155/740] Generating lib/rte_lpm_mingw with a custom command 00:01:46.957 [156/740] Generating lib/rte_lpm_def with a custom command 00:01:46.957 [157/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.957 [158/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:46.957 [159/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:46.957 [160/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:46.957 [161/740] Generating lib/rte_member_def with a custom command 00:01:46.957 [162/740] Generating lib/rte_member_mingw with a custom command 00:01:46.957 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:46.957 [164/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:46.957 [165/740] Generating lib/rte_pcapng_def with a custom command 00:01:46.957 [166/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:47.221 [167/740] Generating lib/rte_pcapng_mingw with a custom command 00:01:47.221 [168/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:47.221 [169/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:47.221 [170/740] Linking static target lib/librte_jobstats.a 00:01:47.221 [171/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.221 [172/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.221 [173/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.221 [174/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.221 [175/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.221 [176/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.221 [177/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:47.221 [178/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.221 [179/740] Linking static target lib/librte_cmdline.a 00:01:47.221 [180/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:47.221 [181/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.221 [182/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.221 [183/740] Linking static target lib/librte_timer.a 00:01:47.221 [184/740] Generating lib/rte_power_mingw with a custom command 00:01:47.221 [185/740] Generating lib/rte_power_def with a custom command 00:01:47.221 [186/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.221 [187/740] Linking static target lib/librte_metrics.a 00:01:47.221 [188/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:47.221 [189/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:47.221 [190/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.221 [191/740] Generating lib/rte_rawdev_def with a custom command 00:01:47.221 [192/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:47.221 [193/740] Linking static target lib/librte_telemetry.a 00:01:47.221 [194/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.221 [195/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.221 [196/740] Generating lib/rte_regexdev_def with a custom command 00:01:47.221 [197/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:47.221 [198/740] Generating lib/rte_dmadev_def with a custom command 00:01:47.221 [199/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:47.221 [200/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:47.221 [201/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.221 [202/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.221 [203/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.221 [204/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.221 [205/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:47.221 [206/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.221 [207/740] Generating lib/rte_rib_def with a custom command 00:01:47.221 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:47.221 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:47.222 [210/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:47.222 [211/740] 
Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:47.222 [212/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:47.222 [213/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:47.222 [214/740] Generating lib/rte_reorder_def with a custom command 00:01:47.222 [215/740] Generating lib/rte_reorder_mingw with a custom command 00:01:47.222 [216/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.222 [217/740] Generating lib/rte_rib_mingw with a custom command 00:01:47.222 [218/740] Linking static target lib/librte_bitratestats.a 00:01:47.222 [219/740] Generating lib/rte_sched_def with a custom command 00:01:47.222 [220/740] Generating lib/rte_sched_mingw with a custom command 00:01:47.222 [221/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:47.222 [222/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.222 [223/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:47.222 [224/740] Generating lib/rte_security_mingw with a custom command 00:01:47.222 [225/740] Generating lib/rte_security_def with a custom command 00:01:47.222 [226/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.222 [227/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:47.222 [228/740] Generating lib/rte_stack_def with a custom command 00:01:47.222 [229/740] Generating lib/rte_stack_mingw with a custom command 00:01:47.222 [230/740] Linking static target lib/librte_net.a 00:01:47.222 [231/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:47.222 [232/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.222 [233/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.222 [234/740] Generating lib/rte_vhost_def with a custom command 00:01:47.222 [235/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.222 [236/740] Generating lib/rte_vhost_mingw with a custom command 00:01:47.222 [237/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:47.222 [238/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:47.485 [239/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:47.485 [240/740] Generating lib/rte_ipsec_def with a custom command 00:01:47.485 [241/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:47.485 [242/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:47.485 [243/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.485 [244/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:47.485 [245/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:47.485 [246/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:47.485 [247/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:47.485 [248/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:47.485 [249/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:47.485 [250/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:47.485 [251/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:47.485 [252/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:47.485 [253/740] Generating lib/rte_fib_def with a custom command 00:01:47.485 
[254/740] Generating lib/rte_fib_mingw with a custom command 00:01:47.485 [255/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:47.485 [256/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:47.485 [257/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:47.485 [258/740] Linking static target lib/librte_stack.a 00:01:47.485 [259/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:47.485 [260/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:47.485 [261/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:47.485 [262/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:47.485 [263/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:47.485 [264/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:47.485 [265/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:47.485 [266/740] Generating lib/rte_port_mingw with a custom command 00:01:47.485 [267/740] Linking static target lib/librte_compressdev.a 00:01:47.485 [268/740] Generating lib/rte_pdump_mingw with a custom command 00:01:47.485 [269/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:47.486 [270/740] Generating lib/rte_pdump_def with a custom command 00:01:47.486 [271/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:47.486 [272/740] Generating lib/rte_port_def with a custom command 00:01:47.486 [273/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:47.486 [274/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:47.486 [275/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:47.486 [276/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:47.486 [277/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:47.486 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:47.486 [279/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.486 [280/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.486 [281/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.747 [282/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:47.747 [283/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:47.747 [284/740] Linking static target lib/librte_rcu.a 00:01:47.747 [285/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.747 [286/740] Linking static target lib/librte_rawdev.a 00:01:47.747 [287/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.747 [288/740] Linking static target lib/librte_mempool.a 00:01:47.747 [289/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:47.747 [290/740] Generating lib/rte_table_def with a custom command 00:01:47.747 [291/740] Generating lib/rte_table_mingw with a custom command 00:01:47.747 [292/740] Linking static target lib/librte_bbdev.a 00:01:47.747 [293/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:47.747 [294/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:47.747 [295/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.747 [296/740] Compiling C 
object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:47.747 [297/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:47.747 [298/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:47.747 [299/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:47.747 [300/740] Linking static target lib/librte_gro.a 00:01:47.747 [301/740] Linking static target lib/librte_gpudev.a 00:01:47.747 [302/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.747 [303/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:47.747 [304/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:47.747 [305/740] Linking static target lib/librte_dmadev.a 00:01:47.747 [306/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:47.747 [307/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:47.747 [308/740] Generating lib/rte_pipeline_mingw with a custom command 00:01:47.747 [309/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.747 [310/740] Generating lib/rte_pipeline_def with a custom command 00:01:47.747 [311/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.747 [312/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:47.747 [313/740] Generating lib/rte_graph_def with a custom command 00:01:47.747 [314/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.747 [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:47.747 [316/740] Linking static target lib/librte_latencystats.a 00:01:47.747 [317/740] Linking target lib/librte_telemetry.so.23.0 00:01:47.747 [318/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:47.747 [319/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:47.747 [320/740] Linking static target lib/librte_gso.a 00:01:47.747 [321/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:47.747 [322/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:47.747 [323/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:47.747 [324/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:47.747 [325/740] Generating lib/rte_graph_mingw with a custom command 00:01:47.747 [326/740] Linking static target lib/librte_distributor.a 00:01:48.090 [327/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:48.090 [328/740] Linking static target lib/librte_ip_frag.a 00:01:48.090 [329/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:48.090 [330/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.090 [331/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:48.090 [332/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:48.090 [333/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:48.090 [334/740] Linking static target lib/librte_regexdev.a 00:01:48.090 [335/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:48.090 [336/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:48.090 [337/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:48.090 [338/740] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:48.090 [339/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.090 [340/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:48.090 [341/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:48.090 [342/740] Generating lib/rte_node_def with a custom command 00:01:48.090 [343/740] Generating lib/rte_node_mingw with a custom command 00:01:48.090 [344/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:48.090 [345/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.090 [346/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:48.090 [347/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:48.090 [348/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.090 [349/740] Linking static target lib/librte_eal.a 00:01:48.090 [350/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.090 [351/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.090 [352/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.090 [353/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.090 [354/740] Linking static target lib/librte_power.a 00:01:48.090 [355/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:48.090 [356/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.090 [357/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:48.090 [358/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:48.090 [359/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.090 [360/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:48.090 [361/740] Linking static target lib/librte_reorder.a 00:01:48.090 [362/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:48.090 [363/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:48.090 [364/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:48.378 [365/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:48.378 [366/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:48.378 [367/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:48.378 [368/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:48.378 [369/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:48.378 [370/740] Linking static target lib/librte_pcapng.a 00:01:48.378 [371/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:48.378 [372/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:48.378 [373/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:48.378 [374/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.378 [375/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:48.379 [376/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:48.379 [377/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:48.379 [378/740] Generating lib/distributor.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:48.379 [379/740] Linking static target lib/librte_security.a 00:01:48.379 [380/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.379 [381/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.379 [382/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:48.379 [383/740] Linking static target lib/librte_mbuf.a 00:01:48.379 [384/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:48.379 [385/740] Linking static target lib/librte_bpf.a 00:01:48.379 [386/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:48.379 [387/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:48.379 [388/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.379 [389/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:48.379 [390/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:48.379 [391/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:48.379 [392/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.379 [393/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:48.379 [394/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.379 [395/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.379 [396/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:48.379 [397/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:48.379 [398/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.379 [399/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:48.379 [400/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.379 [401/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:48.379 [402/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:48.379 [403/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:48.379 [404/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:48.379 [405/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:48.643 [406/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:48.643 [407/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:48.643 [408/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:48.643 [409/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:48.643 [410/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.643 [411/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:48.643 [412/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.643 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:48.643 [414/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:48.643 [415/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:48.643 [416/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:48.643 [417/740] Linking static target lib/librte_rib.a 00:01:48.643 [418/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:48.643 [419/740] Compiling C 
object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:48.643 [420/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:48.643 [421/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:48.643 [422/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:48.643 [423/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.643 [424/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:48.643 [425/740] Linking static target lib/librte_lpm.a 00:01:48.643 [426/740] Linking static target lib/librte_graph.a 00:01:48.643 [427/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.643 [428/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:48.643 [429/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.643 [430/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:48.643 [431/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:48.643 [432/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:48.643 [433/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:48.643 [434/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.643 [435/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:48.643 [436/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.643 [437/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:48.643 [438/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:48.643 [439/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:48.643 [440/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:48.905 [441/740] Linking static target lib/librte_efd.a 00:01:48.905 [442/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:48.905 [443/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:48.905 [444/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.905 [445/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:48.905 [446/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.905 [447/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.905 [448/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.905 [449/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.905 [450/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:48.905 [451/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.905 [452/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.905 [453/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:48.905 [454/740] Linking static target drivers/librte_bus_vdev.a 00:01:48.905 [455/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:48.905 [456/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:48.905 [457/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:48.905 [458/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 
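The bus_vdev steps above show how each enabled component is produced in two flavors: the generated rte_bus_vdev.pmd.c registration stub is compiled once into the librte_bus_vdev.a.p static-archive objects and once into the librte_bus_vdev.so.23.0.p shared-object objects, and both a static archive and a versioned .so are linked. The PMD metadata those stubs embed can be read back after the build; a sketch, assuming this job's build-tmp directory and the dpdk-pmdinfo.py helper shipped in DPDK's usertools:

    # Sketch: dump the PMD registration info embedded via the generated
    # .pmd.c stub (paths assume this job's dpdk checkout and build-tmp dir).
    python3 usertools/dpdk-pmdinfo.py build-tmp/drivers/librte_bus_vdev.so.23.0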
00:01:48.905 [459/740] Linking static target lib/librte_fib.a 00:01:48.905 [460/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:49.169 [461/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.169 [462/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.169 [463/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:49.169 [464/740] Linking static target lib/librte_pdump.a 00:01:49.169 [465/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:49.169 [466/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.169 [467/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:49.169 [468/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:49.169 [469/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.169 [470/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:49.169 [471/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.169 [472/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.169 [473/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:49.169 [474/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:49.169 [475/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.169 [476/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.169 [477/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:49.169 [478/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:49.169 [479/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:49.169 [480/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:49.429 [481/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.429 [482/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.429 [483/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:49.429 [484/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.429 [485/740] Linking static target drivers/librte_bus_pci.a 00:01:49.429 [486/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:49.429 [487/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:49.429 [488/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.429 [489/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:49.429 [490/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:49.429 [491/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:49.429 [492/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:49.429 [493/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:49.429 [494/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:49.429 [495/740] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:49.429 [496/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:49.429 [497/740] Linking static target lib/librte_table.a 00:01:49.429 [498/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:49.429 [499/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:49.429 [500/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:49.429 [501/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:49.429 [502/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:49.689 [503/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:49.689 [504/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.689 [505/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:49.689 [506/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:49.689 [507/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.689 [508/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:49.689 [509/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:49.689 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:49.689 [511/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:49.689 [512/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:49.689 [513/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:49.689 [514/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:49.689 [515/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:49.689 [516/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:49.689 [517/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.689 [518/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.689 [519/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.689 [520/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:49.689 [521/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:49.689 [522/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:49.689 [523/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:49.689 [524/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:49.689 [525/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:49.689 [526/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:49.689 [527/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:49.689 [528/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.689 [529/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:49.689 [530/740] Linking static target lib/librte_sched.a 00:01:49.689 [531/740] Linking static target lib/librte_cryptodev.a 00:01:49.948 [532/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:49.948 [533/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:49.948 [534/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:49.948 [535/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:49.949 [536/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:49.949 [537/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:49.949 [538/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:49.949 [539/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:49.949 [540/740] Linking static target lib/librte_ipsec.a 00:01:49.949 [541/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:49.949 [542/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:49.949 [543/740] Linking static target lib/librte_node.a 00:01:49.949 [544/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:49.949 [545/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.949 [546/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.949 [547/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:49.949 [548/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:49.949 [549/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.949 [550/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.949 [551/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:49.949 [552/740] Linking static target drivers/librte_mempool_ring.a 00:01:49.949 [553/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.949 [554/740] Linking static target lib/librte_ethdev.a 00:01:49.949 [555/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:49.949 [556/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:49.949 [557/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:49.949 [558/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:49.949 [559/740] Linking static target lib/librte_member.a 00:01:49.949 [560/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:50.208 [561/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:50.208 [562/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:50.208 [563/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:50.208 [564/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:50.208 [565/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:50.208 [566/740] Linking static target lib/librte_port.a 00:01:50.208 [567/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:50.208 [568/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:50.208 [569/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:50.208 [570/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:50.208 [571/740] Generating 
lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.208 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:50.208 [573/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:50.208 [574/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:50.208 [575/740] Linking static target lib/librte_eventdev.a 00:01:50.208 [576/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:50.208 [577/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:50.208 [578/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:50.208 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:50.208 [580/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:50.208 [581/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:50.208 [582/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:50.208 [583/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:50.208 [584/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:50.208 [585/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.208 [586/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:50.208 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:50.467 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:50.467 [589/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:50.467 [590/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.467 [591/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:50.467 [592/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.467 [593/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:50.467 [594/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:50.467 [595/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:50.467 [596/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.467 [597/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:50.467 [598/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:50.467 [599/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.467 [600/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:50.467 [601/740] Linking static target lib/librte_hash.a 00:01:50.467 [602/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:50.726 [603/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:50.726 [604/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:50.726 [605/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:50.726 [606/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:50.726 [607/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:50.726 [608/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:01:50.726 [609/740] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:50.726 [610/740] Linking static target lib/librte_acl.a 00:01:50.985 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:50.985 [612/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.985 [613/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:51.244 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:51.244 [615/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:51.244 [616/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.503 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:51.760 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.760 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:51.760 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:52.018 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:52.583 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:52.583 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:52.841 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:52.841 [625/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:52.841 [626/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:52.841 [627/740] Linking static target drivers/librte_net_i40e.a 00:01:53.099 [628/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.357 [629/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.616 [630/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:53.616 [631/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.616 [632/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:54.182 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.443 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.700 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:59.700 [636/740] Linking static target lib/librte_vhost.a 00:02:00.635 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:00.635 [638/740] Linking static target lib/librte_pipeline.a 00:02:00.893 [639/740] Linking target app/dpdk-test-cmdline 00:02:00.893 [640/740] Linking target app/dpdk-pdump 00:02:00.893 [641/740] Linking target app/dpdk-test-acl 00:02:00.893 [642/740] Linking target app/dpdk-test-fib 00:02:01.151 [643/740] Linking target app/dpdk-test-flow-perf 00:02:01.151 [644/740] Linking target app/dpdk-test-crypto-perf 00:02:01.151 [645/740] Linking target app/dpdk-test-bbdev 00:02:01.151 [646/740] Linking target app/dpdk-test-gpudev 00:02:01.151 [647/740] Linking target app/dpdk-test-compress-perf 00:02:01.151 [648/740] Linking target app/dpdk-dumpcap 00:02:01.151 [649/740] Linking target app/dpdk-test-regex 00:02:01.151 [650/740] Linking target app/dpdk-test-security-perf 00:02:01.151 [651/740] Linking target app/dpdk-test-eventdev 00:02:01.151 [652/740] Linking target 
app/dpdk-test-sad 00:02:01.151 [653/740] Linking target app/dpdk-proc-info 00:02:01.151 [654/740] Linking target app/dpdk-test-pipeline 00:02:01.151 [655/740] Linking target app/dpdk-testpmd 00:02:01.718 [656/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.976 [657/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.976 [658/740] Linking target lib/librte_eal.so.23.0 00:02:01.976 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:02.235 [660/740] Linking target lib/librte_timer.so.23.0 00:02:02.235 [661/740] Linking target lib/librte_pci.so.23.0 00:02:02.235 [662/740] Linking target lib/librte_meter.so.23.0 00:02:02.235 [663/740] Linking target lib/librte_ring.so.23.0 00:02:02.235 [664/740] Linking target lib/librte_rawdev.so.23.0 00:02:02.235 [665/740] Linking target lib/librte_cfgfile.so.23.0 00:02:02.235 [666/740] Linking target lib/librte_stack.so.23.0 00:02:02.235 [667/740] Linking target lib/librte_dmadev.so.23.0 00:02:02.235 [668/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:02.235 [669/740] Linking target lib/librte_jobstats.so.23.0 00:02:02.235 [670/740] Linking target lib/librte_graph.so.23.0 00:02:02.235 [671/740] Linking target lib/librte_acl.so.23.0 00:02:02.235 [672/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:02.235 [673/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:02.235 [674/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:02.235 [675/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:02.235 [676/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:02.235 [677/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:02.235 [678/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:02.235 [679/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:02.235 [680/740] Linking target lib/librte_rcu.so.23.0 00:02:02.235 [681/740] Linking target lib/librte_mempool.so.23.0 00:02:02.235 [682/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:02.493 [683/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:02.493 [684/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:02.493 [685/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:02.493 [686/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:02.493 [687/740] Linking target lib/librte_mbuf.so.23.0 00:02:02.493 [688/740] Linking target lib/librte_rib.so.23.0 00:02:02.751 [689/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:02.751 [690/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:02.751 [691/740] Linking target lib/librte_reorder.so.23.0 00:02:02.751 [692/740] Linking target lib/librte_fib.so.23.0 00:02:02.751 [693/740] Linking target lib/librte_distributor.so.23.0 00:02:02.751 [694/740] Linking target lib/librte_bbdev.so.23.0 00:02:02.751 [695/740] Linking target lib/librte_regexdev.so.23.0 00:02:02.751 [696/740] Linking target lib/librte_net.so.23.0 00:02:02.751 [697/740] Linking target lib/librte_compressdev.so.23.0 00:02:02.751 [698/740] Linking target 
lib/librte_gpudev.so.23.0 00:02:02.751 [699/740] Linking target lib/librte_sched.so.23.0 00:02:02.751 [700/740] Linking target lib/librte_cryptodev.so.23.0 00:02:02.751 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:02.751 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:02.751 [703/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:03.008 [704/740] Linking target lib/librte_cmdline.so.23.0 00:02:03.008 [705/740] Linking target lib/librte_hash.so.23.0 00:02:03.008 [706/740] Linking target lib/librte_security.so.23.0 00:02:03.008 [707/740] Linking target lib/librte_ethdev.so.23.0 00:02:03.008 [708/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:03.008 [709/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:03.008 [710/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:03.008 [711/740] Linking target lib/librte_efd.so.23.0 00:02:03.008 [712/740] Linking target lib/librte_lpm.so.23.0 00:02:03.008 [713/740] Linking target lib/librte_metrics.so.23.0 00:02:03.008 [714/740] Linking target lib/librte_member.so.23.0 00:02:03.008 [715/740] Linking target lib/librte_ipsec.so.23.0 00:02:03.008 [716/740] Linking target lib/librte_bpf.so.23.0 00:02:03.009 [717/740] Linking target lib/librte_gso.so.23.0 00:02:03.009 [718/740] Linking target lib/librte_pcapng.so.23.0 00:02:03.009 [719/740] Linking target lib/librte_gro.so.23.0 00:02:03.009 [720/740] Linking target lib/librte_ip_frag.so.23.0 00:02:03.009 [721/740] Linking target lib/librte_power.so.23.0 00:02:03.009 [722/740] Linking target lib/librte_eventdev.so.23.0 00:02:03.265 [723/740] Linking target lib/librte_vhost.so.23.0 00:02:03.265 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:03.265 [725/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:03.265 [726/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:03.265 [727/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:03.265 [728/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:03.265 [729/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:03.265 [730/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:03.265 [731/740] Linking target lib/librte_latencystats.so.23.0 00:02:03.265 [732/740] Linking target lib/librte_node.so.23.0 00:02:03.265 [733/740] Linking target lib/librte_bitratestats.so.23.0 00:02:03.265 [734/740] Linking target lib/librte_pdump.so.23.0 00:02:03.265 [735/740] Linking target lib/librte_port.so.23.0 00:02:03.523 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:03.523 [737/740] Linking target lib/librte_table.so.23.0 00:02:03.523 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:06.054 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.054 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:06.054 21:31:37 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:06.054 21:31:37 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:06.054 21:31:37 build_native_dpdk -- 
common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install 00:02:06.054 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:06.054 [0/1] Installing files. 00:02:06.054 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.054 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.055 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.056 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 
00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.057 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.058 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.058 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.059 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:06.059 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:06.060 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:06.060 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing 
lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
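(Each DPDK component above lands twice under build/lib: a static archive, e.g. librte_eal.a, and a versioned shared object, e.g. librte_eal.so.23.0; the matching librte_bitratestats.so.23.0 follows below. A minimal consumer sketch against this install tree, assuming the shared-library link: the file name hello_eal.c and the compile line are illustrative, not part of this build log.)

/* hello_eal.c -- illustrative sketch only: bring up and tear down the DPDK
 * Environment Abstraction Layer from the librte_eal just installed above.
 * An illustrative compile line against the paths shown in this log:
 *   gcc hello_eal.c -o hello_eal \
 *       -I/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include \
 *       -L/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib -lrte_eal
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* Parses the EAL arguments (cores, memory, ...) and returns the number
     * of consumed arguments, or a negative value on failure. */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    printf("EAL up, main lcore = %u\n", rte_lcore_id());
    rte_eal_cleanup(); /* release hugepages and other EAL resources */
    return 0;
}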
00:02:06.060 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.060 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing 
lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_graph.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:06.323 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:06.323 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:06.323 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.323 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:06.323 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.323 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.324 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.324 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.324 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
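(The lib/net headers installed just above, rte_ether.h through rte_ppp.h, define the on-wire protocol structs that the ethdev headers build on. A short usage sketch, assuming the struct rte_ether_hdr field names of the DPDK 22.x API being installed here (dst_addr, src_addr, ether_type); the file name ether_peek.c and the sample frame are illustrative.)

/* ether_peek.c -- illustrative sketch only: classify a raw frame using the
 * rte_ether.h / rte_byteorder.h headers installed above. */
#include <stdint.h>
#include <stdio.h>
#include <rte_ether.h>
#include <rte_byteorder.h>

static void peek(const uint8_t *frame)
{
    const struct rte_ether_hdr *eh = (const struct rte_ether_hdr *)frame;
    /* ether_type is stored in network byte order on the wire. */
    uint16_t type = rte_be_to_cpu_16(eh->ether_type);

    if (type == RTE_ETHER_TYPE_IPV4)
        printf("IPv4 frame, src MAC starts %02x:%02x\n",
               eh->src_addr.addr_bytes[0], eh->src_addr.addr_bytes[1]);
    else
        printf("ether_type 0x%04x\n", type);
}

int main(void)
{
    uint8_t frame[RTE_ETHER_HDR_LEN] = {0};
    frame[12] = 0x08; frame[13] = 0x00; /* ether_type 0x0800 = IPv4 */
    peek(frame);
    return 0;
}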
00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
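(rte_hash_crc.h, installed just above alongside rte_hash.h and rte_fbk_hash.h, is a header-only CRC32-C helper (hardware-accelerated via SSE4.2 where available), so a sketch like the following needs only the include path, no extra -lrte_hash at link time; the file name crc_demo.c and the seed value are illustrative.)

/* crc_demo.c -- illustrative sketch only: hash a flow key with rte_hash_crc()
 * from the header installed above, as one would for bucket selection. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <rte_hash_crc.h>

int main(void)
{
    const char key[] = "flow-12345";
    /* rte_hash_crc(data, length, seed) -> 32-bit CRC32-C hash. */
    uint32_t h = rte_hash_crc(key, (uint32_t)strlen(key), 0xdeadbeef);
    printf("crc32c(\"%s\") = 0x%08x\n", key, h);
    return 0;
}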
00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.324 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
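By this point the hash headers staged earlier in this run (rte_hash.h, rte_jhash.h and the rte_crc_* variants) are all under dpdk/build/include, so the staged tree can be smoke-tested before the SPDK configure step ever runs. A minimal sketch, assuming the EAL headers were installed by the earlier part of this job; the file name is illustrative, and rte_jhash() is static inline so no library has to exist yet:

cat > /tmp/jhash_smoke.c <<'EOF'
#include <stdint.h>
#include <stdio.h>
#include <rte_jhash.h>  /* header-only Jenkins hash from the install run above */

int main(void)
{
	uint32_t key = 42;
	/* rte_jhash(key, length in bytes, initial seed value) */
	printf("jhash = %u\n", rte_jhash(&key, sizeof(key), 0));
	return 0;
}
EOF
cc -I /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include /tmp/jhash_smoke.c -o /tmp/jhash_smoke
/tmp/jhash_smoke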
00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:06.325 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:06.325 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:06.325 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:06.325 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:06.325 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:06.325 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:06.325 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:06.325 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:06.325 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:06.325 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:06.325 
Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:06.325 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:06.325 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:06.325 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:06.325 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:06.325 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:06.325 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:06.325 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:06.325 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:06.325 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:06.326 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:06.326 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:06.326 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:06.326 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:06.326 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:06.326 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:06.326 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:06.326 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:06.326 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:06.326 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:06.326 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:06.326 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:06.326 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:06.326 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:06.326 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:06.326 Installing symlink pointing to 
librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:06.326 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:06.326 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:06.326 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:06.326 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:06.326 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:06.326 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:06.326 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:06.326 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:06.326 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:06.326 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:06.326 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:06.326 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:06.326 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:06.326 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:06.326 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:06.326 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:06.326 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:06.326 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:06.326 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:06.326 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:06.326 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:06.326 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:06.326 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:06.326 Installing symlink pointing to librte_jobstats.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:06.326 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:06.326 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:06.326 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:06.326 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:06.326 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:06.326 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:06.326 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:06.326 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:06.326 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:06.326 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:06.326 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:06.326 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:06.326 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:06.326 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:06.326 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:06.326 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:06.326 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:06.326 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:06.326 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:06.326 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:06.326 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:06.326 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:06.326 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:06.326 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:06.326 Installing symlink pointing to 
librte_security.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:06.326 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:06.326 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:06.326 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:06.326 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:06.326 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:06.326 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:06.326 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:06.326 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:06.326 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:06.326 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:06.326 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:06.326 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:06.326 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:06.326 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:06.326 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:06.326 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:06.326 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:06.326 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:06.326 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:06.326 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:06.326 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:06.326 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:06.326 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:06.326 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:06.326 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:06.326 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:06.326 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:06.326 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:06.326 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:06.326 Installing symlink pointing to librte_graph.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:06.326 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:06.326 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:06.326 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:06.326 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:06.326 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:06.326 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:06.326 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:06.326 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:06.326 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:06.326 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:06.326 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:06.326 21:31:38 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:06.326 21:31:38 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:06.326 00:02:06.326 real 0m26.286s 00:02:06.326 user 6m37.561s 00:02:06.326 sys 2m12.311s 00:02:06.584 21:31:38 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:06.584 21:31:38 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:06.584 ************************************ 00:02:06.584 END TEST build_native_dpdk 00:02:06.584 ************************************ 00:02:06.584 21:31:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:06.584 21:31:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:06.584 21:31:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:06.584 21:31:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:06.584 21:31:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:06.584 21:31:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:06.584 21:31:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:06.584 21:31:38 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:06.584 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
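Configure locates this private DPDK through the two pkg-config files installed above (libdpdk.pc and libdpdk-libs.pc) rather than through any system copy, and the './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' lines show the PMDs being relocated into a plugin directory by symlink-drivers-solibs.sh. A short sketch of reproducing both lookups by hand; the 22.11.x version expectation is inferred from the .so.23 ABI and this job's DPDK tag, not printed by this step:

export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk   # expected: a 22.11.x release, matching the .so.23 ABI above
pkg-config --cflags libdpdk       # should resolve to -I/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
pkg-config --libs libdpdk         # the -lrte_* set that the SPDK configure step consumes
readlink /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bus_pci.so
# -> dpdk/pmds-23.0/librte_bus_pci.so, the driver plugin directory populated above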
00:02:06.842 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:06.842 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:06.842 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:07.100 Using 'verbs' RDMA provider 00:02:20.232 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:35.106 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:35.106 Creating mk/config.mk...done. 00:02:35.106 Creating mk/cc.flags.mk...done. 00:02:35.106 Type 'make' to build. 00:02:35.106 21:32:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:35.106 21:32:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:35.106 21:32:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:35.106 21:32:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.106 ************************************ 00:02:35.106 START TEST make 00:02:35.106 ************************************ 00:02:35.106 21:32:05 make -- common/autotest_common.sh@1125 -- $ make -j112 00:02:35.106 make[1]: Nothing to be done for 'all'. 00:03:07.289 CC lib/ut/ut.o 00:03:07.289 CC lib/ut_mock/mock.o 00:03:07.289 CC lib/log/log.o 00:03:07.289 CC lib/log/log_flags.o 00:03:07.289 CC lib/log/log_deprecated.o 00:03:07.289 LIB libspdk_ut.a 00:03:07.289 LIB libspdk_ut_mock.a 00:03:07.289 LIB libspdk_log.a 00:03:07.289 SO libspdk_ut.so.2.0 00:03:07.289 SO libspdk_ut_mock.so.6.0 00:03:07.289 SO libspdk_log.so.7.0 00:03:07.289 SYMLINK libspdk_ut.so 00:03:07.289 SYMLINK libspdk_ut_mock.so 00:03:07.289 SYMLINK libspdk_log.so 00:03:07.289 CC lib/dma/dma.o 00:03:07.289 CC lib/ioat/ioat.o 00:03:07.289 CC lib/util/base64.o 00:03:07.289 CC lib/util/bit_array.o 00:03:07.289 CC lib/util/cpuset.o 00:03:07.289 CXX lib/trace_parser/trace.o 00:03:07.289 CC lib/util/crc16.o 00:03:07.289 CC lib/util/crc32.o 00:03:07.289 CC lib/util/crc32c.o 00:03:07.289 CC lib/util/crc32_ieee.o 00:03:07.289 CC lib/util/crc64.o 00:03:07.289 CC lib/util/dif.o 00:03:07.289 CC lib/util/fd.o 00:03:07.289 CC lib/util/fd_group.o 00:03:07.289 CC lib/util/file.o 00:03:07.289 CC lib/util/hexlify.o 00:03:07.289 CC lib/util/iov.o 00:03:07.289 CC lib/util/math.o 00:03:07.289 CC lib/util/net.o 00:03:07.289 CC lib/util/pipe.o 00:03:07.289 CC lib/util/strerror_tls.o 00:03:07.289 CC lib/util/xor.o 00:03:07.289 CC lib/util/string.o 00:03:07.289 CC lib/util/uuid.o 00:03:07.289 CC lib/util/zipf.o 00:03:07.289 CC lib/util/md5.o 00:03:07.289 CC lib/vfio_user/host/vfio_user_pci.o 00:03:07.289 CC lib/vfio_user/host/vfio_user.o 00:03:07.289 LIB libspdk_dma.a 00:03:07.289 SO libspdk_dma.so.5.0 00:03:07.289 LIB libspdk_ioat.a 00:03:07.289 SO libspdk_ioat.so.7.0 00:03:07.289 SYMLINK libspdk_dma.so 00:03:07.289 SYMLINK libspdk_ioat.so 00:03:07.289 LIB libspdk_vfio_user.a 00:03:07.289 SO libspdk_vfio_user.so.5.0 00:03:07.289 LIB libspdk_util.a 00:03:07.289 SYMLINK libspdk_vfio_user.so 00:03:07.289 SO libspdk_util.so.10.0 00:03:07.289 SYMLINK libspdk_util.so 00:03:07.289 LIB libspdk_trace_parser.a 00:03:07.289 SO libspdk_trace_parser.so.6.0 00:03:07.289 SYMLINK libspdk_trace_parser.so 00:03:07.289 CC lib/env_dpdk/env.o 00:03:07.289 CC lib/env_dpdk/memory.o 00:03:07.289 CC lib/env_dpdk/pci.o 00:03:07.289 CC lib/env_dpdk/init.o 00:03:07.289 CC lib/env_dpdk/threads.o 00:03:07.289 CC lib/env_dpdk/pci_ioat.o 00:03:07.289 CC lib/env_dpdk/pci_virtio.o 00:03:07.289 CC lib/vmd/vmd.o 00:03:07.289 CC 
lib/env_dpdk/pci_event.o 00:03:07.289 CC lib/env_dpdk/pci_vmd.o 00:03:07.289 CC lib/vmd/led.o 00:03:07.289 CC lib/env_dpdk/pci_idxd.o 00:03:07.289 CC lib/env_dpdk/pci_dpdk.o 00:03:07.289 CC lib/env_dpdk/sigbus_handler.o 00:03:07.289 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:07.289 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:07.289 CC lib/rdma_provider/common.o 00:03:07.289 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:07.289 CC lib/idxd/idxd.o 00:03:07.289 CC lib/idxd/idxd_user.o 00:03:07.289 CC lib/rdma_utils/rdma_utils.o 00:03:07.289 CC lib/idxd/idxd_kernel.o 00:03:07.289 CC lib/json/json_parse.o 00:03:07.289 CC lib/json/json_write.o 00:03:07.289 CC lib/json/json_util.o 00:03:07.289 CC lib/conf/conf.o 00:03:07.289 LIB libspdk_rdma_provider.a 00:03:07.289 SO libspdk_rdma_provider.so.6.0 00:03:07.289 LIB libspdk_conf.a 00:03:07.289 LIB libspdk_rdma_utils.a 00:03:07.289 SO libspdk_conf.so.6.0 00:03:07.289 SO libspdk_rdma_utils.so.1.0 00:03:07.289 LIB libspdk_json.a 00:03:07.289 SYMLINK libspdk_rdma_provider.so 00:03:07.289 SO libspdk_json.so.6.0 00:03:07.289 SYMLINK libspdk_rdma_utils.so 00:03:07.289 SYMLINK libspdk_conf.so 00:03:07.289 SYMLINK libspdk_json.so 00:03:07.289 LIB libspdk_idxd.a 00:03:07.289 LIB libspdk_vmd.a 00:03:07.289 SO libspdk_idxd.so.12.1 00:03:07.289 SO libspdk_vmd.so.6.0 00:03:07.289 SYMLINK libspdk_idxd.so 00:03:07.289 SYMLINK libspdk_vmd.so 00:03:07.289 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.289 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.289 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.289 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:07.289 LIB libspdk_jsonrpc.a 00:03:07.289 SO libspdk_jsonrpc.so.6.0 00:03:07.289 LIB libspdk_env_dpdk.a 00:03:07.289 SYMLINK libspdk_jsonrpc.so 00:03:07.289 SO libspdk_env_dpdk.so.15.0 00:03:07.289 SYMLINK libspdk_env_dpdk.so 00:03:07.289 CC lib/rpc/rpc.o 00:03:07.289 LIB libspdk_rpc.a 00:03:07.289 SO libspdk_rpc.so.6.0 00:03:07.289 SYMLINK libspdk_rpc.so 00:03:07.289 CC lib/notify/notify.o 00:03:07.289 CC lib/notify/notify_rpc.o 00:03:07.289 CC lib/trace/trace.o 00:03:07.289 CC lib/trace/trace_flags.o 00:03:07.289 CC lib/trace/trace_rpc.o 00:03:07.289 CC lib/keyring/keyring_rpc.o 00:03:07.289 CC lib/keyring/keyring.o 00:03:07.289 LIB libspdk_notify.a 00:03:07.289 SO libspdk_notify.so.6.0 00:03:07.289 LIB libspdk_keyring.a 00:03:07.289 SYMLINK libspdk_notify.so 00:03:07.289 LIB libspdk_trace.a 00:03:07.289 SO libspdk_keyring.so.2.0 00:03:07.289 SO libspdk_trace.so.11.0 00:03:07.289 SYMLINK libspdk_keyring.so 00:03:07.289 SYMLINK libspdk_trace.so 00:03:07.289 CC lib/thread/thread.o 00:03:07.289 CC lib/thread/iobuf.o 00:03:07.289 CC lib/sock/sock.o 00:03:07.290 CC lib/sock/sock_rpc.o 00:03:07.290 LIB libspdk_sock.a 00:03:07.290 SO libspdk_sock.so.10.0 00:03:07.290 SYMLINK libspdk_sock.so 00:03:07.290 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:07.290 CC lib/nvme/nvme_ctrlr.o 00:03:07.290 CC lib/nvme/nvme_fabric.o 00:03:07.290 CC lib/nvme/nvme_ns_cmd.o 00:03:07.290 CC lib/nvme/nvme_ns.o 00:03:07.290 CC lib/nvme/nvme_pcie_common.o 00:03:07.290 CC lib/nvme/nvme_pcie.o 00:03:07.290 CC lib/nvme/nvme_qpair.o 00:03:07.290 CC lib/nvme/nvme.o 00:03:07.290 CC lib/nvme/nvme_quirks.o 00:03:07.290 CC lib/nvme/nvme_transport.o 00:03:07.290 CC lib/nvme/nvme_discovery.o 00:03:07.290 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:07.290 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:07.290 CC lib/nvme/nvme_tcp.o 00:03:07.290 CC lib/nvme/nvme_opal.o 00:03:07.290 CC lib/nvme/nvme_stubs.o 00:03:07.290 CC lib/nvme/nvme_io_msg.o 00:03:07.290 CC lib/nvme/nvme_poll_group.o 00:03:07.290 CC 
lib/nvme/nvme_zns.o 00:03:07.290 CC lib/nvme/nvme_rdma.o 00:03:07.290 CC lib/nvme/nvme_auth.o 00:03:07.290 CC lib/nvme/nvme_cuse.o 00:03:07.290 LIB libspdk_thread.a 00:03:07.290 SO libspdk_thread.so.10.1 00:03:07.549 SYMLINK libspdk_thread.so 00:03:07.808 CC lib/init/subsystem.o 00:03:07.808 CC lib/accel/accel_rpc.o 00:03:07.808 CC lib/accel/accel.o 00:03:07.808 CC lib/init/json_config.o 00:03:07.808 CC lib/blob/blobstore.o 00:03:07.808 CC lib/accel/accel_sw.o 00:03:07.808 CC lib/init/subsystem_rpc.o 00:03:07.808 CC lib/blob/request.o 00:03:07.808 CC lib/init/rpc.o 00:03:07.808 CC lib/blob/zeroes.o 00:03:07.808 CC lib/blob/blob_bs_dev.o 00:03:07.808 CC lib/virtio/virtio.o 00:03:07.808 CC lib/virtio/virtio_vhost_user.o 00:03:07.808 CC lib/virtio/virtio_vfio_user.o 00:03:07.808 CC lib/virtio/virtio_pci.o 00:03:07.808 CC lib/fsdev/fsdev_rpc.o 00:03:07.808 CC lib/fsdev/fsdev.o 00:03:07.808 CC lib/fsdev/fsdev_io.o 00:03:08.066 LIB libspdk_init.a 00:03:08.066 SO libspdk_init.so.6.0 00:03:08.066 LIB libspdk_virtio.a 00:03:08.066 SO libspdk_virtio.so.7.0 00:03:08.066 SYMLINK libspdk_init.so 00:03:08.067 SYMLINK libspdk_virtio.so 00:03:08.326 LIB libspdk_fsdev.a 00:03:08.326 SO libspdk_fsdev.so.1.0 00:03:08.326 SYMLINK libspdk_fsdev.so 00:03:08.326 CC lib/event/app.o 00:03:08.326 CC lib/event/reactor.o 00:03:08.326 CC lib/event/log_rpc.o 00:03:08.326 CC lib/event/app_rpc.o 00:03:08.326 CC lib/event/scheduler_static.o 00:03:08.585 LIB libspdk_accel.a 00:03:08.585 SO libspdk_accel.so.16.0 00:03:08.585 LIB libspdk_nvme.a 00:03:08.585 SYMLINK libspdk_accel.so 00:03:08.845 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:08.845 LIB libspdk_event.a 00:03:08.845 SO libspdk_nvme.so.14.0 00:03:08.845 SO libspdk_event.so.14.0 00:03:08.845 SYMLINK libspdk_event.so 00:03:09.104 SYMLINK libspdk_nvme.so 00:03:09.104 CC lib/bdev/bdev.o 00:03:09.104 CC lib/bdev/bdev_rpc.o 00:03:09.104 CC lib/bdev/bdev_zone.o 00:03:09.104 CC lib/bdev/part.o 00:03:09.104 CC lib/bdev/scsi_nvme.o 00:03:09.104 LIB libspdk_fuse_dispatcher.a 00:03:09.364 SO libspdk_fuse_dispatcher.so.1.0 00:03:09.364 SYMLINK libspdk_fuse_dispatcher.so 00:03:09.931 LIB libspdk_blob.a 00:03:09.931 SO libspdk_blob.so.11.0 00:03:09.931 SYMLINK libspdk_blob.so 00:03:10.499 CC lib/lvol/lvol.o 00:03:10.499 CC lib/blobfs/blobfs.o 00:03:10.499 CC lib/blobfs/tree.o 00:03:10.758 LIB libspdk_bdev.a 00:03:11.016 SO libspdk_bdev.so.16.0 00:03:11.016 LIB libspdk_blobfs.a 00:03:11.016 SO libspdk_blobfs.so.10.0 00:03:11.016 SYMLINK libspdk_bdev.so 00:03:11.016 LIB libspdk_lvol.a 00:03:11.016 SYMLINK libspdk_blobfs.so 00:03:11.016 SO libspdk_lvol.so.10.0 00:03:11.275 SYMLINK libspdk_lvol.so 00:03:11.275 CC lib/nvmf/ctrlr.o 00:03:11.275 CC lib/nvmf/ctrlr_discovery.o 00:03:11.275 CC lib/scsi/dev.o 00:03:11.275 CC lib/scsi/port.o 00:03:11.275 CC lib/nvmf/ctrlr_bdev.o 00:03:11.275 CC lib/scsi/lun.o 00:03:11.275 CC lib/nvmf/subsystem.o 00:03:11.275 CC lib/nvmf/nvmf.o 00:03:11.275 CC lib/scsi/scsi.o 00:03:11.275 CC lib/scsi/scsi_bdev.o 00:03:11.276 CC lib/nvmf/nvmf_rpc.o 00:03:11.276 CC lib/nvmf/tcp.o 00:03:11.276 CC lib/nvmf/transport.o 00:03:11.276 CC lib/scsi/scsi_pr.o 00:03:11.276 CC lib/scsi/scsi_rpc.o 00:03:11.276 CC lib/nvmf/stubs.o 00:03:11.276 CC lib/scsi/task.o 00:03:11.276 CC lib/nvmf/mdns_server.o 00:03:11.276 CC lib/nvmf/rdma.o 00:03:11.276 CC lib/ublk/ublk.o 00:03:11.276 CC lib/nvmf/auth.o 00:03:11.276 CC lib/nbd/nbd.o 00:03:11.276 CC lib/ublk/ublk_rpc.o 00:03:11.276 CC lib/nbd/nbd_rpc.o 00:03:11.276 CC lib/ftl/ftl_layout.o 00:03:11.276 CC lib/ftl/ftl_core.o 
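Each LIB/SO/SYMLINK triple in the make output above is one library emitted three ways: a static archive, a versioned shared object, and an unversioned development symlink, which is what the --with-shared flag on the configure line requests. A hedged sketch of checking that the env layer really linked against the private DPDK build rather than a system copy, assuming SPDK's default build/lib output directory:

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
ldd build/lib/libspdk_env_dpdk.so | grep librte_
# the DT_NEEDED entries should resolve into /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib;
# at run time that directory has to be on the loader path, e.g.
export LD_LIBRARY_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:$LD_LIBRARY_PATH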
00:03:11.276 CC lib/ftl/ftl_init.o 00:03:11.276 CC lib/ftl/ftl_debug.o 00:03:11.276 CC lib/ftl/ftl_io.o 00:03:11.276 CC lib/ftl/ftl_sb.o 00:03:11.276 CC lib/ftl/ftl_l2p.o 00:03:11.276 CC lib/ftl/ftl_l2p_flat.o 00:03:11.276 CC lib/ftl/ftl_nv_cache.o 00:03:11.276 CC lib/ftl/ftl_band.o 00:03:11.276 CC lib/ftl/ftl_band_ops.o 00:03:11.534 CC lib/ftl/ftl_writer.o 00:03:11.534 CC lib/ftl/ftl_rq.o 00:03:11.534 CC lib/ftl/ftl_reloc.o 00:03:11.534 CC lib/ftl/ftl_l2p_cache.o 00:03:11.534 CC lib/ftl/ftl_p2l.o 00:03:11.534 CC lib/ftl/ftl_p2l_log.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:11.535 CC lib/ftl/utils/ftl_conf.o 00:03:11.535 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:11.535 CC lib/ftl/utils/ftl_md.o 00:03:11.535 CC lib/ftl/utils/ftl_mempool.o 00:03:11.535 CC lib/ftl/utils/ftl_bitmap.o 00:03:11.535 CC lib/ftl/utils/ftl_property.o 00:03:11.535 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:11.535 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:11.535 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:11.535 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:11.535 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:11.535 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:11.535 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:11.535 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:11.535 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:11.535 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:11.535 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:11.535 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:11.535 CC lib/ftl/base/ftl_base_dev.o 00:03:11.535 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:11.535 CC lib/ftl/base/ftl_base_bdev.o 00:03:11.535 CC lib/ftl/ftl_trace.o 00:03:12.101 LIB libspdk_nbd.a 00:03:12.101 SO libspdk_nbd.so.7.0 00:03:12.101 SYMLINK libspdk_nbd.so 00:03:12.101 LIB libspdk_ublk.a 00:03:12.101 LIB libspdk_scsi.a 00:03:12.101 SO libspdk_ublk.so.3.0 00:03:12.101 SO libspdk_scsi.so.9.0 00:03:12.101 SYMLINK libspdk_ublk.so 00:03:12.101 SYMLINK libspdk_scsi.so 00:03:12.360 LIB libspdk_ftl.a 00:03:12.360 SO libspdk_ftl.so.9.0 00:03:12.620 CC lib/iscsi/init_grp.o 00:03:12.620 CC lib/iscsi/conn.o 00:03:12.620 CC lib/iscsi/iscsi.o 00:03:12.620 CC lib/iscsi/param.o 00:03:12.620 CC lib/iscsi/portal_grp.o 00:03:12.620 CC lib/iscsi/tgt_node.o 00:03:12.620 CC lib/iscsi/iscsi_subsystem.o 00:03:12.620 CC lib/iscsi/iscsi_rpc.o 00:03:12.620 CC lib/iscsi/task.o 00:03:12.620 CC lib/vhost/vhost.o 00:03:12.620 CC lib/vhost/vhost_rpc.o 00:03:12.620 CC lib/vhost/vhost_scsi.o 00:03:12.620 CC lib/vhost/vhost_blk.o 00:03:12.620 CC lib/vhost/rte_vhost_user.o 00:03:12.620 SYMLINK libspdk_ftl.so 00:03:13.188 LIB libspdk_nvmf.a 00:03:13.188 SO libspdk_nvmf.so.19.0 00:03:13.188 SYMLINK libspdk_nvmf.so 00:03:13.188 LIB libspdk_vhost.a 00:03:13.446 SO libspdk_vhost.so.8.0 00:03:13.446 SYMLINK libspdk_vhost.so 00:03:13.446 LIB libspdk_iscsi.a 00:03:13.705 SO libspdk_iscsi.so.8.0 00:03:13.705 SYMLINK libspdk_iscsi.so 00:03:14.274 CC module/env_dpdk/env_dpdk_rpc.o 00:03:14.274 LIB libspdk_env_dpdk_rpc.a 00:03:14.274 CC module/accel/error/accel_error_rpc.o 00:03:14.532 CC 
module/accel/error/accel_error.o 00:03:14.532 CC module/keyring/linux/keyring.o 00:03:14.532 CC module/keyring/linux/keyring_rpc.o 00:03:14.532 CC module/accel/iaa/accel_iaa.o 00:03:14.532 CC module/accel/ioat/accel_ioat.o 00:03:14.532 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:14.532 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.532 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.532 CC module/sock/posix/posix.o 00:03:14.532 CC module/blob/bdev/blob_bdev.o 00:03:14.532 CC module/fsdev/aio/linux_aio_mgr.o 00:03:14.532 CC module/fsdev/aio/fsdev_aio.o 00:03:14.532 CC module/keyring/file/keyring.o 00:03:14.532 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:14.532 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:14.532 CC module/keyring/file/keyring_rpc.o 00:03:14.532 CC module/accel/dsa/accel_dsa.o 00:03:14.532 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.532 SO libspdk_env_dpdk_rpc.so.6.0 00:03:14.532 CC module/scheduler/gscheduler/gscheduler.o 00:03:14.532 SYMLINK libspdk_env_dpdk_rpc.so 00:03:14.532 LIB libspdk_keyring_linux.a 00:03:14.532 LIB libspdk_scheduler_dpdk_governor.a 00:03:14.532 LIB libspdk_accel_ioat.a 00:03:14.532 LIB libspdk_keyring_file.a 00:03:14.532 LIB libspdk_accel_error.a 00:03:14.532 SO libspdk_keyring_linux.so.1.0 00:03:14.532 LIB libspdk_scheduler_gscheduler.a 00:03:14.532 LIB libspdk_accel_iaa.a 00:03:14.532 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:14.532 SO libspdk_keyring_file.so.2.0 00:03:14.532 LIB libspdk_scheduler_dynamic.a 00:03:14.532 SO libspdk_accel_ioat.so.6.0 00:03:14.532 SO libspdk_accel_error.so.2.0 00:03:14.532 SO libspdk_scheduler_gscheduler.so.4.0 00:03:14.532 SO libspdk_accel_iaa.so.3.0 00:03:14.791 SYMLINK libspdk_keyring_linux.so 00:03:14.791 SO libspdk_scheduler_dynamic.so.4.0 00:03:14.791 LIB libspdk_blob_bdev.a 00:03:14.791 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:14.791 SYMLINK libspdk_keyring_file.so 00:03:14.791 SYMLINK libspdk_accel_error.so 00:03:14.791 SYMLINK libspdk_scheduler_gscheduler.so 00:03:14.791 LIB libspdk_accel_dsa.a 00:03:14.791 SYMLINK libspdk_accel_ioat.so 00:03:14.791 SYMLINK libspdk_accel_iaa.so 00:03:14.791 SO libspdk_blob_bdev.so.11.0 00:03:14.791 SYMLINK libspdk_scheduler_dynamic.so 00:03:14.791 SO libspdk_accel_dsa.so.5.0 00:03:14.791 SYMLINK libspdk_blob_bdev.so 00:03:14.791 SYMLINK libspdk_accel_dsa.so 00:03:14.791 LIB libspdk_fsdev_aio.a 00:03:15.050 SO libspdk_fsdev_aio.so.1.0 00:03:15.050 LIB libspdk_sock_posix.a 00:03:15.050 SYMLINK libspdk_fsdev_aio.so 00:03:15.050 SO libspdk_sock_posix.so.6.0 00:03:15.050 SYMLINK libspdk_sock_posix.so 00:03:15.308 CC module/bdev/error/vbdev_error.o 00:03:15.308 CC module/bdev/error/vbdev_error_rpc.o 00:03:15.308 CC module/blobfs/bdev/blobfs_bdev.o 00:03:15.308 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:15.308 CC module/bdev/delay/vbdev_delay.o 00:03:15.308 CC module/bdev/aio/bdev_aio.o 00:03:15.308 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.308 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:15.308 CC module/bdev/lvol/vbdev_lvol.o 00:03:15.308 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:15.308 CC module/bdev/malloc/bdev_malloc.o 00:03:15.308 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:15.308 CC module/bdev/gpt/gpt.o 00:03:15.308 CC module/bdev/gpt/vbdev_gpt.o 00:03:15.308 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.308 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.308 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.308 CC module/bdev/passthru/vbdev_passthru.o 00:03:15.308 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.308 CC 
module/bdev/nvme/bdev_nvme.o 00:03:15.308 CC module/bdev/null/bdev_null.o 00:03:15.308 CC module/bdev/null/bdev_null_rpc.o 00:03:15.308 CC module/bdev/raid/bdev_raid.o 00:03:15.308 CC module/bdev/nvme/nvme_rpc.o 00:03:15.308 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.308 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.308 CC module/bdev/raid/raid1.o 00:03:15.308 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.308 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.308 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.308 CC module/bdev/nvme/vbdev_opal.o 00:03:15.308 CC module/bdev/raid/raid0.o 00:03:15.308 CC module/bdev/raid/concat.o 00:03:15.308 CC module/bdev/ftl/bdev_ftl.o 00:03:15.308 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.308 CC module/bdev/split/vbdev_split.o 00:03:15.308 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.308 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.308 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.308 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:15.308 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.308 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.566 LIB libspdk_blobfs_bdev.a 00:03:15.566 SO libspdk_blobfs_bdev.so.6.0 00:03:15.566 LIB libspdk_bdev_error.a 00:03:15.566 LIB libspdk_bdev_gpt.a 00:03:15.566 LIB libspdk_bdev_split.a 00:03:15.566 SO libspdk_bdev_error.so.6.0 00:03:15.566 LIB libspdk_bdev_null.a 00:03:15.566 SYMLINK libspdk_blobfs_bdev.so 00:03:15.566 SO libspdk_bdev_gpt.so.6.0 00:03:15.566 SO libspdk_bdev_split.so.6.0 00:03:15.566 LIB libspdk_bdev_passthru.a 00:03:15.566 LIB libspdk_bdev_ftl.a 00:03:15.566 SO libspdk_bdev_null.so.6.0 00:03:15.566 LIB libspdk_bdev_aio.a 00:03:15.566 SYMLINK libspdk_bdev_error.so 00:03:15.566 LIB libspdk_bdev_delay.a 00:03:15.566 LIB libspdk_bdev_malloc.a 00:03:15.566 LIB libspdk_bdev_zone_block.a 00:03:15.566 SO libspdk_bdev_ftl.so.6.0 00:03:15.566 SO libspdk_bdev_passthru.so.6.0 00:03:15.566 SO libspdk_bdev_malloc.so.6.0 00:03:15.566 SYMLINK libspdk_bdev_gpt.so 00:03:15.566 SO libspdk_bdev_aio.so.6.0 00:03:15.566 SO libspdk_bdev_delay.so.6.0 00:03:15.566 SO libspdk_bdev_zone_block.so.6.0 00:03:15.566 SYMLINK libspdk_bdev_split.so 00:03:15.824 LIB libspdk_bdev_iscsi.a 00:03:15.824 SYMLINK libspdk_bdev_null.so 00:03:15.824 SYMLINK libspdk_bdev_ftl.so 00:03:15.824 SYMLINK libspdk_bdev_delay.so 00:03:15.824 SO libspdk_bdev_iscsi.so.6.0 00:03:15.824 SYMLINK libspdk_bdev_malloc.so 00:03:15.824 SYMLINK libspdk_bdev_passthru.so 00:03:15.824 SYMLINK libspdk_bdev_aio.so 00:03:15.824 SYMLINK libspdk_bdev_zone_block.so 00:03:15.824 LIB libspdk_bdev_lvol.a 00:03:15.824 LIB libspdk_bdev_virtio.a 00:03:15.824 SYMLINK libspdk_bdev_iscsi.so 00:03:15.824 SO libspdk_bdev_lvol.so.6.0 00:03:15.824 SO libspdk_bdev_virtio.so.6.0 00:03:15.824 SYMLINK libspdk_bdev_lvol.so 00:03:15.824 SYMLINK libspdk_bdev_virtio.so 00:03:16.083 LIB libspdk_bdev_raid.a 00:03:16.083 SO libspdk_bdev_raid.so.6.0 00:03:16.341 SYMLINK libspdk_bdev_raid.so 00:03:16.910 LIB libspdk_bdev_nvme.a 00:03:16.910 SO libspdk_bdev_nvme.so.7.0 00:03:17.169 SYMLINK libspdk_bdev_nvme.so 00:03:17.737 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.737 CC module/event/subsystems/iobuf/iobuf.o 00:03:17.737 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:17.737 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:17.737 CC module/event/subsystems/vmd/vmd.o 00:03:17.737 CC module/event/subsystems/fsdev/fsdev.o 00:03:17.737 CC module/event/subsystems/keyring/keyring.o 00:03:17.737 CC module/event/subsystems/sock/sock.o 00:03:17.737 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:17.996 LIB libspdk_event_vhost_blk.a 00:03:17.996 SO libspdk_event_vhost_blk.so.3.0 00:03:17.996 LIB libspdk_event_iobuf.a 00:03:17.996 LIB libspdk_event_fsdev.a 00:03:17.996 LIB libspdk_event_vmd.a 00:03:17.996 LIB libspdk_event_keyring.a 00:03:17.996 LIB libspdk_event_scheduler.a 00:03:17.996 LIB libspdk_event_sock.a 00:03:17.996 SO libspdk_event_fsdev.so.1.0 00:03:17.996 SO libspdk_event_iobuf.so.3.0 00:03:17.996 SO libspdk_event_keyring.so.1.0 00:03:17.996 SO libspdk_event_vmd.so.6.0 00:03:17.996 SYMLINK libspdk_event_vhost_blk.so 00:03:17.996 SO libspdk_event_sock.so.5.0 00:03:17.996 SO libspdk_event_scheduler.so.4.0 00:03:17.996 SYMLINK libspdk_event_fsdev.so 00:03:17.996 SYMLINK libspdk_event_iobuf.so 00:03:17.996 SYMLINK libspdk_event_keyring.so 00:03:17.996 SYMLINK libspdk_event_vmd.so 00:03:17.996 SYMLINK libspdk_event_sock.so 00:03:17.996 SYMLINK libspdk_event_scheduler.so 00:03:18.255 CC module/event/subsystems/accel/accel.o 00:03:18.514 LIB libspdk_event_accel.a 00:03:18.514 SO libspdk_event_accel.so.6.0 00:03:18.514 SYMLINK libspdk_event_accel.so 00:03:19.082 CC module/event/subsystems/bdev/bdev.o 00:03:19.082 LIB libspdk_event_bdev.a 00:03:19.082 SO libspdk_event_bdev.so.6.0 00:03:19.082 SYMLINK libspdk_event_bdev.so 00:03:19.649 CC module/event/subsystems/scsi/scsi.o 00:03:19.649 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:19.649 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:19.649 CC module/event/subsystems/nbd/nbd.o 00:03:19.649 CC module/event/subsystems/ublk/ublk.o 00:03:19.649 LIB libspdk_event_nbd.a 00:03:19.649 LIB libspdk_event_scsi.a 00:03:19.649 LIB libspdk_event_ublk.a 00:03:19.649 SO libspdk_event_ublk.so.3.0 00:03:19.649 SO libspdk_event_nbd.so.6.0 00:03:19.649 LIB libspdk_event_nvmf.a 00:03:19.649 SO libspdk_event_scsi.so.6.0 00:03:19.649 SO libspdk_event_nvmf.so.6.0 00:03:19.649 SYMLINK libspdk_event_scsi.so 00:03:19.908 SYMLINK libspdk_event_nbd.so 00:03:19.908 SYMLINK libspdk_event_ublk.so 00:03:19.908 SYMLINK libspdk_event_nvmf.so 00:03:20.168 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.168 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.168 LIB libspdk_event_vhost_scsi.a 00:03:20.168 LIB libspdk_event_iscsi.a 00:03:20.168 SO libspdk_event_vhost_scsi.so.3.0 00:03:20.427 SO libspdk_event_iscsi.so.6.0 00:03:20.427 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.427 SYMLINK libspdk_event_iscsi.so 00:03:20.686 SO libspdk.so.6.0 00:03:20.686 SYMLINK libspdk.so 00:03:20.945 CXX app/trace/trace.o 00:03:20.945 CC app/trace_record/trace_record.o 00:03:20.945 CC app/spdk_nvme_identify/identify.o 00:03:20.945 CC app/spdk_nvme_perf/perf.o 00:03:20.945 CC app/spdk_top/spdk_top.o 00:03:20.945 CC test/rpc_client/rpc_client_test.o 00:03:20.945 CC app/spdk_lspci/spdk_lspci.o 00:03:20.945 TEST_HEADER include/spdk/accel_module.h 00:03:20.945 TEST_HEADER include/spdk/accel.h 00:03:20.945 TEST_HEADER include/spdk/assert.h 00:03:20.945 TEST_HEADER include/spdk/barrier.h 00:03:20.945 TEST_HEADER include/spdk/bdev.h 00:03:20.945 TEST_HEADER include/spdk/base64.h 00:03:20.945 TEST_HEADER include/spdk/bdev_zone.h 00:03:20.945 CC app/spdk_nvme_discover/discovery_aer.o 00:03:20.945 TEST_HEADER include/spdk/bit_pool.h 00:03:20.945 TEST_HEADER include/spdk/bdev_module.h 00:03:20.945 TEST_HEADER include/spdk/bit_array.h 00:03:20.945 TEST_HEADER include/spdk/blob_bdev.h 00:03:20.945 TEST_HEADER include/spdk/blobfs.h 00:03:20.945 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:20.945 TEST_HEADER include/spdk/blob.h 
00:03:20.945 TEST_HEADER include/spdk/config.h 00:03:20.945 TEST_HEADER include/spdk/conf.h 00:03:20.945 TEST_HEADER include/spdk/cpuset.h 00:03:20.945 TEST_HEADER include/spdk/crc16.h 00:03:20.945 TEST_HEADER include/spdk/crc32.h 00:03:20.945 TEST_HEADER include/spdk/crc64.h 00:03:20.945 TEST_HEADER include/spdk/dma.h 00:03:20.945 TEST_HEADER include/spdk/dif.h 00:03:20.945 TEST_HEADER include/spdk/env_dpdk.h 00:03:20.945 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:20.945 TEST_HEADER include/spdk/endian.h 00:03:20.945 TEST_HEADER include/spdk/env.h 00:03:20.945 TEST_HEADER include/spdk/event.h 00:03:20.945 TEST_HEADER include/spdk/fd.h 00:03:20.945 TEST_HEADER include/spdk/fd_group.h 00:03:20.945 TEST_HEADER include/spdk/file.h 00:03:20.945 TEST_HEADER include/spdk/ftl.h 00:03:20.945 TEST_HEADER include/spdk/fsdev.h 00:03:20.945 TEST_HEADER include/spdk/fsdev_module.h 00:03:20.945 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:20.945 TEST_HEADER include/spdk/hexlify.h 00:03:20.945 TEST_HEADER include/spdk/gpt_spec.h 00:03:20.945 TEST_HEADER include/spdk/histogram_data.h 00:03:20.945 TEST_HEADER include/spdk/init.h 00:03:20.945 TEST_HEADER include/spdk/idxd.h 00:03:20.945 TEST_HEADER include/spdk/idxd_spec.h 00:03:20.945 TEST_HEADER include/spdk/ioat.h 00:03:20.945 TEST_HEADER include/spdk/ioat_spec.h 00:03:20.945 TEST_HEADER include/spdk/json.h 00:03:20.945 TEST_HEADER include/spdk/jsonrpc.h 00:03:20.945 TEST_HEADER include/spdk/iscsi_spec.h 00:03:20.945 CC app/nvmf_tgt/nvmf_main.o 00:03:20.945 TEST_HEADER include/spdk/keyring_module.h 00:03:20.945 TEST_HEADER include/spdk/keyring.h 00:03:20.945 TEST_HEADER include/spdk/log.h 00:03:20.945 TEST_HEADER include/spdk/likely.h 00:03:20.945 TEST_HEADER include/spdk/lvol.h 00:03:20.945 TEST_HEADER include/spdk/md5.h 00:03:20.945 TEST_HEADER include/spdk/memory.h 00:03:20.945 TEST_HEADER include/spdk/mmio.h 00:03:20.945 TEST_HEADER include/spdk/net.h 00:03:20.945 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.945 TEST_HEADER include/spdk/nbd.h 00:03:20.945 TEST_HEADER include/spdk/nvme.h 00:03:20.945 TEST_HEADER include/spdk/notify.h 00:03:20.945 TEST_HEADER include/spdk/nvme_intel.h 00:03:20.945 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:20.945 TEST_HEADER include/spdk/nvme_spec.h 00:03:20.945 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:20.945 TEST_HEADER include/spdk/nvme_zns.h 00:03:20.946 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:20.946 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:20.946 TEST_HEADER include/spdk/nvmf_spec.h 00:03:20.946 TEST_HEADER include/spdk/nvmf.h 00:03:20.946 TEST_HEADER include/spdk/opal.h 00:03:20.946 CC app/spdk_dd/spdk_dd.o 00:03:20.946 TEST_HEADER include/spdk/nvmf_transport.h 00:03:20.946 TEST_HEADER include/spdk/pci_ids.h 00:03:20.946 TEST_HEADER include/spdk/opal_spec.h 00:03:20.946 TEST_HEADER include/spdk/queue.h 00:03:20.946 TEST_HEADER include/spdk/pipe.h 00:03:20.946 TEST_HEADER include/spdk/rpc.h 00:03:20.946 TEST_HEADER include/spdk/scheduler.h 00:03:20.946 TEST_HEADER include/spdk/reduce.h 00:03:20.946 TEST_HEADER include/spdk/scsi.h 00:03:20.946 TEST_HEADER include/spdk/sock.h 00:03:20.946 TEST_HEADER include/spdk/scsi_spec.h 00:03:20.946 TEST_HEADER include/spdk/stdinc.h 00:03:20.946 TEST_HEADER include/spdk/string.h 00:03:20.946 TEST_HEADER include/spdk/trace.h 00:03:20.946 TEST_HEADER include/spdk/thread.h 00:03:20.946 TEST_HEADER include/spdk/trace_parser.h 00:03:20.946 TEST_HEADER include/spdk/util.h 00:03:20.946 TEST_HEADER include/spdk/tree.h 00:03:20.946 TEST_HEADER include/spdk/ublk.h 
00:03:20.946 TEST_HEADER include/spdk/uuid.h 00:03:20.946 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:20.946 TEST_HEADER include/spdk/version.h 00:03:20.946 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:20.946 TEST_HEADER include/spdk/vhost.h 00:03:20.946 TEST_HEADER include/spdk/vmd.h 00:03:20.946 TEST_HEADER include/spdk/xor.h 00:03:20.946 TEST_HEADER include/spdk/zipf.h 00:03:20.946 CXX test/cpp_headers/accel.o 00:03:20.946 CXX test/cpp_headers/accel_module.o 00:03:20.946 CXX test/cpp_headers/assert.o 00:03:20.946 CC app/spdk_tgt/spdk_tgt.o 00:03:20.946 CXX test/cpp_headers/barrier.o 00:03:20.946 CXX test/cpp_headers/base64.o 00:03:20.946 CXX test/cpp_headers/bdev.o 00:03:20.946 CXX test/cpp_headers/bdev_module.o 00:03:20.946 CXX test/cpp_headers/bit_pool.o 00:03:20.946 CXX test/cpp_headers/bdev_zone.o 00:03:20.946 CXX test/cpp_headers/bit_array.o 00:03:20.946 CXX test/cpp_headers/blobfs.o 00:03:20.946 CXX test/cpp_headers/blob_bdev.o 00:03:20.946 CXX test/cpp_headers/blobfs_bdev.o 00:03:20.946 CXX test/cpp_headers/blob.o 00:03:20.946 CXX test/cpp_headers/conf.o 00:03:20.946 CXX test/cpp_headers/cpuset.o 00:03:20.946 CXX test/cpp_headers/config.o 00:03:20.946 CXX test/cpp_headers/crc16.o 00:03:20.946 CXX test/cpp_headers/crc32.o 00:03:20.946 CXX test/cpp_headers/dif.o 00:03:20.946 CXX test/cpp_headers/dma.o 00:03:21.209 CXX test/cpp_headers/env_dpdk.o 00:03:21.209 CXX test/cpp_headers/crc64.o 00:03:21.209 CXX test/cpp_headers/env.o 00:03:21.209 CXX test/cpp_headers/endian.o 00:03:21.209 CXX test/cpp_headers/fd_group.o 00:03:21.209 CXX test/cpp_headers/event.o 00:03:21.209 CXX test/cpp_headers/fd.o 00:03:21.209 CXX test/cpp_headers/file.o 00:03:21.209 CXX test/cpp_headers/ftl.o 00:03:21.209 CXX test/cpp_headers/fuse_dispatcher.o 00:03:21.209 CXX test/cpp_headers/fsdev.o 00:03:21.209 CXX test/cpp_headers/gpt_spec.o 00:03:21.209 CXX test/cpp_headers/fsdev_module.o 00:03:21.209 CXX test/cpp_headers/histogram_data.o 00:03:21.209 CXX test/cpp_headers/hexlify.o 00:03:21.209 CXX test/cpp_headers/idxd.o 00:03:21.209 CXX test/cpp_headers/idxd_spec.o 00:03:21.209 CC test/env/vtophys/vtophys.o 00:03:21.209 CXX test/cpp_headers/init.o 00:03:21.209 CXX test/cpp_headers/ioat.o 00:03:21.209 CXX test/cpp_headers/ioat_spec.o 00:03:21.209 CXX test/cpp_headers/iscsi_spec.o 00:03:21.209 CXX test/cpp_headers/json.o 00:03:21.209 CXX test/cpp_headers/jsonrpc.o 00:03:21.209 CXX test/cpp_headers/keyring_module.o 00:03:21.209 CXX test/cpp_headers/keyring.o 00:03:21.209 CC test/env/pci/pci_ut.o 00:03:21.209 CXX test/cpp_headers/likely.o 00:03:21.209 CXX test/cpp_headers/log.o 00:03:21.209 CXX test/cpp_headers/memory.o 00:03:21.209 CXX test/cpp_headers/lvol.o 00:03:21.209 CXX test/cpp_headers/md5.o 00:03:21.209 CXX test/cpp_headers/mmio.o 00:03:21.209 CC test/env/memory/memory_ut.o 00:03:21.209 CXX test/cpp_headers/net.o 00:03:21.209 CXX test/cpp_headers/nbd.o 00:03:21.209 CXX test/cpp_headers/notify.o 00:03:21.209 CC examples/util/zipf/zipf.o 00:03:21.209 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:21.209 CXX test/cpp_headers/nvme_intel.o 00:03:21.209 CXX test/cpp_headers/nvme.o 00:03:21.209 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.209 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:21.209 CXX test/cpp_headers/nvme_spec.o 00:03:21.209 CXX test/cpp_headers/nvme_zns.o 00:03:21.209 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.209 CXX test/cpp_headers/nvmf.o 00:03:21.209 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.209 CXX test/cpp_headers/nvmf_transport.o 00:03:21.209 CXX test/cpp_headers/nvmf_spec.o 
00:03:21.209 CXX test/cpp_headers/opal_spec.o 00:03:21.209 CXX test/cpp_headers/opal.o 00:03:21.209 CC examples/ioat/perf/perf.o 00:03:21.209 CXX test/cpp_headers/pci_ids.o 00:03:21.209 CC examples/ioat/verify/verify.o 00:03:21.209 CXX test/cpp_headers/queue.o 00:03:21.209 CXX test/cpp_headers/reduce.o 00:03:21.209 CC test/thread/poller_perf/poller_perf.o 00:03:21.209 CXX test/cpp_headers/pipe.o 00:03:21.209 CXX test/cpp_headers/rpc.o 00:03:21.209 CXX test/cpp_headers/scheduler.o 00:03:21.209 CXX test/cpp_headers/scsi.o 00:03:21.209 CXX test/cpp_headers/scsi_spec.o 00:03:21.209 CXX test/cpp_headers/sock.o 00:03:21.209 CXX test/cpp_headers/stdinc.o 00:03:21.209 CXX test/cpp_headers/string.o 00:03:21.209 CXX test/cpp_headers/thread.o 00:03:21.209 CXX test/cpp_headers/trace.o 00:03:21.209 CXX test/cpp_headers/trace_parser.o 00:03:21.209 CXX test/cpp_headers/tree.o 00:03:21.209 CC test/app/histogram_perf/histogram_perf.o 00:03:21.209 CC test/app/jsoncat/jsoncat.o 00:03:21.209 CC test/app/stub/stub.o 00:03:21.209 CC test/app/bdev_svc/bdev_svc.o 00:03:21.209 CC test/dma/test_dma/test_dma.o 00:03:21.209 CC app/fio/nvme/fio_plugin.o 00:03:21.209 LINK spdk_lspci 00:03:21.209 CXX test/cpp_headers/ublk.o 00:03:21.209 CC app/fio/bdev/fio_plugin.o 00:03:21.496 LINK spdk_nvme_discover 00:03:21.496 LINK rpc_client_test 00:03:21.496 CC test/env/mem_callbacks/mem_callbacks.o 00:03:21.496 LINK interrupt_tgt 00:03:21.763 LINK nvmf_tgt 00:03:21.763 LINK spdk_trace_record 00:03:21.763 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:21.763 LINK vtophys 00:03:21.763 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:21.763 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.763 LINK histogram_perf 00:03:21.763 LINK iscsi_tgt 00:03:21.763 LINK env_dpdk_post_init 00:03:21.763 LINK poller_perf 00:03:21.763 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:21.763 CXX test/cpp_headers/util.o 00:03:22.022 CXX test/cpp_headers/uuid.o 00:03:22.022 LINK zipf 00:03:22.023 CXX test/cpp_headers/version.o 00:03:22.023 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.023 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.023 LINK spdk_tgt 00:03:22.023 CXX test/cpp_headers/vhost.o 00:03:22.023 CXX test/cpp_headers/vmd.o 00:03:22.023 CXX test/cpp_headers/xor.o 00:03:22.023 CXX test/cpp_headers/zipf.o 00:03:22.023 LINK jsoncat 00:03:22.023 LINK stub 00:03:22.023 LINK bdev_svc 00:03:22.023 LINK ioat_perf 00:03:22.023 LINK verify 00:03:22.023 LINK spdk_trace 00:03:22.023 LINK spdk_dd 00:03:22.023 LINK mem_callbacks 00:03:22.023 LINK pci_ut 00:03:22.281 LINK spdk_bdev 00:03:22.281 LINK nvme_fuzz 00:03:22.281 LINK spdk_nvme 00:03:22.281 LINK test_dma 00:03:22.281 LINK spdk_nvme_perf 00:03:22.281 LINK vhost_fuzz 00:03:22.281 LINK spdk_nvme_identify 00:03:22.281 LINK memory_ut 00:03:22.281 CC examples/sock/hello_world/hello_sock.o 00:03:22.281 CC examples/vmd/led/led.o 00:03:22.281 CC examples/vmd/lsvmd/lsvmd.o 00:03:22.281 CC examples/idxd/perf/perf.o 00:03:22.540 CC app/vhost/vhost.o 00:03:22.540 CC examples/thread/thread/thread_ex.o 00:03:22.540 CC test/event/reactor_perf/reactor_perf.o 00:03:22.540 LINK spdk_top 00:03:22.540 CC test/event/event_perf/event_perf.o 00:03:22.540 CC test/event/reactor/reactor.o 00:03:22.540 CC test/event/app_repeat/app_repeat.o 00:03:22.540 CC test/event/scheduler/scheduler.o 00:03:22.540 LINK led 00:03:22.540 LINK lsvmd 00:03:22.540 LINK reactor_perf 00:03:22.540 LINK hello_sock 00:03:22.540 LINK event_perf 00:03:22.540 LINK reactor 00:03:22.540 LINK vhost 00:03:22.540 LINK app_repeat 00:03:22.800 LINK idxd_perf 
00:03:22.800 LINK thread 00:03:22.800 LINK scheduler 00:03:22.800 CC test/nvme/boot_partition/boot_partition.o 00:03:22.800 CC test/nvme/compliance/nvme_compliance.o 00:03:22.800 CC test/nvme/reset/reset.o 00:03:22.800 CC test/nvme/sgl/sgl.o 00:03:22.800 CC test/nvme/startup/startup.o 00:03:22.800 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:22.800 CC test/nvme/aer/aer.o 00:03:22.800 CC test/nvme/cuse/cuse.o 00:03:22.800 CC test/nvme/overhead/overhead.o 00:03:22.800 CC test/nvme/fdp/fdp.o 00:03:22.800 CC test/nvme/e2edp/nvme_dp.o 00:03:22.800 CC test/nvme/fused_ordering/fused_ordering.o 00:03:22.800 CC test/nvme/err_injection/err_injection.o 00:03:22.800 CC test/nvme/connect_stress/connect_stress.o 00:03:22.800 CC test/nvme/simple_copy/simple_copy.o 00:03:22.800 CC test/accel/dif/dif.o 00:03:22.800 CC test/nvme/reserve/reserve.o 00:03:22.800 CC test/blobfs/mkfs/mkfs.o 00:03:23.066 CC test/lvol/esnap/esnap.o 00:03:23.066 LINK boot_partition 00:03:23.066 LINK doorbell_aers 00:03:23.066 LINK startup 00:03:23.066 CC examples/nvme/arbitration/arbitration.o 00:03:23.066 CC examples/nvme/reconnect/reconnect.o 00:03:23.066 CC examples/nvme/hello_world/hello_world.o 00:03:23.066 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:23.066 LINK connect_stress 00:03:23.066 CC examples/nvme/abort/abort.o 00:03:23.066 LINK fused_ordering 00:03:23.066 LINK err_injection 00:03:23.066 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.066 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.066 LINK simple_copy 00:03:23.066 CC examples/nvme/hotplug/hotplug.o 00:03:23.066 LINK mkfs 00:03:23.066 LINK reserve 00:03:23.066 LINK reset 00:03:23.066 LINK sgl 00:03:23.066 LINK overhead 00:03:23.066 LINK nvme_dp 00:03:23.066 LINK aer 00:03:23.066 LINK nvme_compliance 00:03:23.066 LINK fdp 00:03:23.066 LINK iscsi_fuzz 00:03:23.323 CC examples/accel/perf/accel_perf.o 00:03:23.323 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.323 CC examples/blob/cli/blobcli.o 00:03:23.323 CC examples/blob/hello_world/hello_blob.o 00:03:23.323 LINK pmr_persistence 00:03:23.323 LINK cmb_copy 00:03:23.323 LINK hello_world 00:03:23.323 LINK hotplug 00:03:23.323 LINK reconnect 00:03:23.323 LINK arbitration 00:03:23.323 LINK abort 00:03:23.323 LINK dif 00:03:23.582 LINK hello_blob 00:03:23.582 LINK hello_fsdev 00:03:23.582 LINK nvme_manage 00:03:23.582 LINK accel_perf 00:03:23.582 LINK blobcli 00:03:23.840 LINK cuse 00:03:24.099 CC test/bdev/bdevio/bdevio.o 00:03:24.099 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.099 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.357 LINK bdevio 00:03:24.357 LINK hello_bdev 00:03:24.615 LINK bdevperf 00:03:25.183 CC examples/nvmf/nvmf/nvmf.o 00:03:25.442 LINK nvmf 00:03:26.466 LINK esnap 00:03:26.735 00:03:26.735 real 0m53.030s 00:03:26.735 user 6m9.137s 00:03:26.735 sys 2m54.948s 00:03:26.735 21:32:58 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:26.735 21:32:58 make -- common/autotest_common.sh@10 -- $ set +x 00:03:26.735 ************************************ 00:03:26.735 END TEST make 00:03:26.735 ************************************ 00:03:26.735 21:32:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:26.735 21:32:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:26.735 21:32:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:26.735 21:32:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.735 21:32:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 
00:03:26.735 21:32:58 -- pm/common@44 -- $ pid=2741194 00:03:26.735 21:32:58 -- pm/common@50 -- $ kill -TERM 2741194 00:03:26.735 21:32:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.735 21:32:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:26.735 21:32:58 -- pm/common@44 -- $ pid=2741195 00:03:26.735 21:32:58 -- pm/common@50 -- $ kill -TERM 2741195 00:03:26.735 21:32:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.735 21:32:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:26.735 21:32:58 -- pm/common@44 -- $ pid=2741199 00:03:26.735 21:32:58 -- pm/common@50 -- $ kill -TERM 2741199 00:03:26.735 21:32:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.735 21:32:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:26.735 21:32:58 -- pm/common@44 -- $ pid=2741223 00:03:26.735 21:32:58 -- pm/common@50 -- $ sudo -E kill -TERM 2741223 00:03:26.994 21:32:59 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:26.994 21:32:59 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:26.994 21:32:59 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:26.994 21:32:59 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:26.994 21:32:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:26.994 21:32:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:26.994 21:32:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:26.994 21:32:59 -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.994 21:32:59 -- scripts/common.sh@336 -- # read -ra ver1 00:03:26.994 21:32:59 -- scripts/common.sh@337 -- # IFS=.-: 00:03:26.994 21:32:59 -- scripts/common.sh@337 -- # read -ra ver2 00:03:26.994 21:32:59 -- scripts/common.sh@338 -- # local 'op=<' 00:03:26.994 21:32:59 -- scripts/common.sh@340 -- # ver1_l=2 00:03:26.994 21:32:59 -- scripts/common.sh@341 -- # ver2_l=1 00:03:26.994 21:32:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:26.994 21:32:59 -- scripts/common.sh@344 -- # case "$op" in 00:03:26.994 21:32:59 -- scripts/common.sh@345 -- # : 1 00:03:26.994 21:32:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:26.994 21:32:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:26.994 21:32:59 -- scripts/common.sh@365 -- # decimal 1 00:03:26.994 21:32:59 -- scripts/common.sh@353 -- # local d=1 00:03:26.994 21:32:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.994 21:32:59 -- scripts/common.sh@355 -- # echo 1 00:03:26.994 21:32:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:26.994 21:32:59 -- scripts/common.sh@366 -- # decimal 2 00:03:26.994 21:32:59 -- scripts/common.sh@353 -- # local d=2 00:03:26.994 21:32:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.994 21:32:59 -- scripts/common.sh@355 -- # echo 2 00:03:26.994 21:32:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:26.994 21:32:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:26.994 21:32:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:26.994 21:32:59 -- scripts/common.sh@368 -- # return 0 00:03:26.994 21:32:59 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.994 21:32:59 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:26.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.994 --rc genhtml_branch_coverage=1 00:03:26.994 --rc genhtml_function_coverage=1 00:03:26.994 --rc genhtml_legend=1 00:03:26.994 --rc geninfo_all_blocks=1 00:03:26.994 --rc geninfo_unexecuted_blocks=1 00:03:26.994 00:03:26.994 ' 00:03:26.994 21:32:59 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:26.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.994 --rc genhtml_branch_coverage=1 00:03:26.994 --rc genhtml_function_coverage=1 00:03:26.994 --rc genhtml_legend=1 00:03:26.994 --rc geninfo_all_blocks=1 00:03:26.994 --rc geninfo_unexecuted_blocks=1 00:03:26.994 00:03:26.994 ' 00:03:26.994 21:32:59 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:26.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.994 --rc genhtml_branch_coverage=1 00:03:26.994 --rc genhtml_function_coverage=1 00:03:26.994 --rc genhtml_legend=1 00:03:26.994 --rc geninfo_all_blocks=1 00:03:26.994 --rc geninfo_unexecuted_blocks=1 00:03:26.994 00:03:26.994 ' 00:03:26.994 21:32:59 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:26.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.994 --rc genhtml_branch_coverage=1 00:03:26.994 --rc genhtml_function_coverage=1 00:03:26.994 --rc genhtml_legend=1 00:03:26.994 --rc geninfo_all_blocks=1 00:03:26.994 --rc geninfo_unexecuted_blocks=1 00:03:26.994 00:03:26.994 ' 00:03:26.994 21:32:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:26.994 21:32:59 -- nvmf/common.sh@7 -- # uname -s 00:03:26.994 21:32:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:26.994 21:32:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:26.994 21:32:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:26.994 21:32:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:26.994 21:32:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:26.994 21:32:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:26.994 21:32:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:26.994 21:32:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:26.994 21:32:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:26.994 21:32:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:26.994 21:32:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:26.994 21:32:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:26.994 21:32:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:26.994 21:32:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:26.994 21:32:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:26.994 21:32:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:26.994 21:32:59 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:26.994 21:32:59 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:26.994 21:32:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:26.994 21:32:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:26.994 21:32:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:26.994 21:32:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.994 21:32:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.994 21:32:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.994 21:32:59 -- paths/export.sh@5 -- # export PATH 00:03:26.995 21:32:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.995 21:32:59 -- nvmf/common.sh@51 -- # : 0 00:03:26.995 21:32:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:26.995 21:32:59 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:26.995 21:32:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:26.995 21:32:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:26.995 21:32:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:26.995 21:32:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:26.995 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:26.995 21:32:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:26.995 21:32:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:26.995 21:32:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:26.995 21:32:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:26.995 21:32:59 -- spdk/autotest.sh@32 -- # uname -s 00:03:26.995 21:32:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:26.995 21:32:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:26.995 21:32:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:26.995 
21:32:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:26.995 21:32:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:26.995 21:32:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:26.995 21:32:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:26.995 21:32:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:26.995 21:32:59 -- spdk/autotest.sh@48 -- # udevadm_pid=2819298 00:03:26.995 21:32:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:26.995 21:32:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:26.995 21:32:59 -- pm/common@17 -- # local monitor 00:03:26.995 21:32:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.995 21:32:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.995 21:32:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.995 21:32:59 -- pm/common@21 -- # date +%s 00:03:26.995 21:32:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.995 21:32:59 -- pm/common@21 -- # date +%s 00:03:26.995 21:32:59 -- pm/common@21 -- # date +%s 00:03:26.995 21:32:59 -- pm/common@25 -- # sleep 1 00:03:26.995 21:32:59 -- pm/common@21 -- # date +%s 00:03:26.995 21:32:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732912379 00:03:26.995 21:32:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732912379 00:03:26.995 21:32:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732912379 00:03:26.995 21:32:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732912379 00:03:27.253 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732912379_collect-cpu-temp.pm.log 00:03:27.253 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732912379_collect-cpu-load.pm.log 00:03:27.253 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732912379_collect-vmstat.pm.log 00:03:27.253 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732912379_collect-bmc-pm.bmc.pm.log 00:03:28.186 21:33:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.186 21:33:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:28.186 21:33:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:28.186 21:33:00 -- common/autotest_common.sh@10 -- # set +x 00:03:28.186 21:33:00 -- spdk/autotest.sh@59 -- # create_test_list 00:03:28.186 21:33:00 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:28.186 21:33:00 -- common/autotest_common.sh@10 -- # set +x 00:03:28.186 21:33:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:28.186 21:33:00 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:28.186 21:33:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:28.186 21:33:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:28.186 21:33:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:28.186 21:33:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:28.186 21:33:00 -- common/autotest_common.sh@1455 -- # uname 00:03:28.186 21:33:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:28.186 21:33:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:28.186 21:33:00 -- common/autotest_common.sh@1475 -- # uname 00:03:28.186 21:33:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:28.186 21:33:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:28.186 21:33:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:28.186 lcov: LCOV version 1.15 00:03:28.186 21:33:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:46.274 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:46.274 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.844 21:33:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:52.844 21:33:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.844 21:33:24 -- common/autotest_common.sh@10 -- # set +x 00:03:52.844 21:33:24 -- spdk/autotest.sh@78 -- # rm -f 00:03:52.844 21:33:24 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.136 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:56.136 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:56.396 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:56.396 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:56.396 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:56.396 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:56.396 21:33:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:56.396 21:33:28 -- 
common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:56.396 21:33:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:56.396 21:33:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:56.396 21:33:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:56.396 21:33:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:56.396 21:33:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:56.396 21:33:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.396 21:33:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:56.396 21:33:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:56.396 21:33:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.396 21:33:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.396 21:33:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:56.396 21:33:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:56.396 21:33:28 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:56.396 No valid GPT data, bailing 00:03:56.396 21:33:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.396 21:33:28 -- scripts/common.sh@394 -- # pt= 00:03:56.396 21:33:28 -- scripts/common.sh@395 -- # return 1 00:03:56.396 21:33:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:56.396 1+0 records in 00:03:56.396 1+0 records out 00:03:56.396 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0017605 s, 596 MB/s 00:03:56.396 21:33:28 -- spdk/autotest.sh@105 -- # sync 00:03:56.396 21:33:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:56.396 21:33:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:56.396 21:33:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:04.525 21:33:35 -- spdk/autotest.sh@111 -- # uname -s 00:04:04.525 21:33:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:04.525 21:33:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:04.525 21:33:35 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:07.063 Hugepages 00:04:07.063 node hugesize free / total 00:04:07.063 node0 1048576kB 0 / 0 00:04:07.063 node0 2048kB 0 / 0 00:04:07.063 node1 1048576kB 0 / 0 00:04:07.063 node1 2048kB 0 / 0 00:04:07.063 00:04:07.063 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.063 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:07.063 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:07.063 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:07.063 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:07.063 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:07.063 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:07.063 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:07.063 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:07.063 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:07.063 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:07.063 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:07.063 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:07.063 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:07.063 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:07.323 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:07.323 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:07.323 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:07.323 21:33:39 -- spdk/autotest.sh@117 -- # uname 
-s 00:04:07.323 21:33:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:07.323 21:33:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:07.323 21:33:39 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:10.614 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.614 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.148 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.148 21:33:44 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:13.741 21:33:45 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:13.741 21:33:45 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:13.741 21:33:45 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.741 21:33:45 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:13.741 21:33:45 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:13.741 21:33:45 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:13.741 21:33:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.741 21:33:45 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:13.741 21:33:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:14.001 21:33:46 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:14.001 21:33:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:14.001 21:33:46 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.288 Waiting for block devices as requested 00:04:17.288 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:17.288 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:17.288 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:17.288 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:17.288 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:17.288 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:17.288 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:17.547 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:17.547 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:17.547 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:17.805 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:17.805 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:17.805 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:18.064 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:18.064 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:18.064 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:18.323 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:18.323 21:33:50 -- common/autotest_common.sh@1522 -- # for bdf in 
"${bdfs[@]}" 00:04:18.323 21:33:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:18.323 21:33:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:18.323 21:33:50 -- common/autotest_common.sh@1485 -- # grep 0000:d8:00.0/nvme/nvme 00:04:18.323 21:33:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:18.323 21:33:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:18.323 21:33:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:18.323 21:33:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:18.323 21:33:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:18.323 21:33:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:18.323 21:33:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:18.323 21:33:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:18.323 21:33:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:18.323 21:33:50 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:18.323 21:33:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:18.323 21:33:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:18.323 21:33:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:18.323 21:33:50 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:18.323 21:33:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:18.323 21:33:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:18.323 21:33:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:18.323 21:33:50 -- common/autotest_common.sh@1541 -- # continue 00:04:18.323 21:33:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:18.323 21:33:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.323 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:04:18.582 21:33:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:18.582 21:33:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.582 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:04:18.583 21:33:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:21.931 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.931 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.460 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:24.460 21:33:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:24.460 21:33:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.460 
21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:04:24.460 21:33:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:24.460 21:33:56 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:24.460 21:33:56 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.460 21:33:56 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:24.460 21:33:56 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:24.460 21:33:56 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:24.460 21:33:56 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:24.460 21:33:56 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:24.460 21:33:56 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:24.460 21:33:56 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:24.460 21:33:56 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.460 21:33:56 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.460 21:33:56 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:24.460 21:33:56 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:24.460 21:33:56 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:24.460 21:33:56 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:24.460 21:33:56 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:24.460 21:33:56 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:24.460 21:33:56 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:24.460 21:33:56 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:24.460 21:33:56 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:24.460 21:33:56 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:04:24.460 21:33:56 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:04:24.460 21:33:56 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2835870 00:04:24.460 21:33:56 -- common/autotest_common.sh@1583 -- # waitforlisten 2835870 00:04:24.460 21:33:56 -- common/autotest_common.sh@831 -- # '[' -z 2835870 ']' 00:04:24.461 21:33:56 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.461 21:33:56 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.461 21:33:56 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.461 21:33:56 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.461 21:33:56 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.461 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:04:24.461 [2024-11-29 21:33:56.370626] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:04:24.461 [2024-11-29 21:33:56.370679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835870 ] 00:04:24.461 [2024-11-29 21:33:56.441418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.461 [2024-11-29 21:33:56.481119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.461 21:33:56 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.461 21:33:56 -- common/autotest_common.sh@864 -- # return 0 00:04:24.461 21:33:56 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:24.461 21:33:56 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:24.461 21:33:56 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:27.752 nvme0n1 00:04:27.752 21:33:59 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:27.752 [2024-11-29 21:33:59.848365] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:27.752 request: 00:04:27.752 { 00:04:27.752 "nvme_ctrlr_name": "nvme0", 00:04:27.752 "password": "test", 00:04:27.752 "method": "bdev_nvme_opal_revert", 00:04:27.752 "req_id": 1 00:04:27.752 } 00:04:27.752 Got JSON-RPC error response 00:04:27.752 response: 00:04:27.752 { 00:04:27.752 "code": -32602, 00:04:27.752 "message": "Invalid parameters" 00:04:27.752 } 00:04:27.752 21:33:59 -- common/autotest_common.sh@1589 -- # true 00:04:27.752 21:33:59 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:27.752 21:33:59 -- common/autotest_common.sh@1593 -- # killprocess 2835870 00:04:27.752 21:33:59 -- common/autotest_common.sh@950 -- # '[' -z 2835870 ']' 00:04:27.752 21:33:59 -- common/autotest_common.sh@954 -- # kill -0 2835870 00:04:27.752 21:33:59 -- common/autotest_common.sh@955 -- # uname 00:04:27.752 21:33:59 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.752 21:33:59 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2835870 00:04:27.752 21:33:59 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.752 21:33:59 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.752 21:33:59 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2835870' 00:04:27.752 killing process with pid 2835870 00:04:27.752 21:33:59 -- common/autotest_common.sh@969 -- # kill 2835870 00:04:27.752 21:33:59 -- common/autotest_common.sh@974 -- # wait 2835870 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:27.752 
EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
cleared instead of 2097152 00:04:28.012 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:28.012 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:28.012 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:28.012 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:28.012 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:28.012 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:28.012 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:28.012 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:30.616 21:34:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:30.616 21:34:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:30.616 21:34:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:30.616 21:34:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:30.616 21:34:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:30.616 21:34:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.616 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:04:30.616 21:34:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:30.616 21:34:02 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:30.616 21:34:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.616 21:34:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.616 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:04:30.616 ************************************ 00:04:30.616 START TEST env 00:04:30.616 ************************************ 00:04:30.616 21:34:02 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:30.616 * Looking for test storage... 00:04:30.616 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:30.616 21:34:02 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:30.616 21:34:02 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:30.616 21:34:02 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:30.616 21:34:02 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:30.616 21:34:02 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.616 21:34:02 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.616 21:34:02 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.616 21:34:02 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.616 21:34:02 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.616 21:34:02 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.616 21:34:02 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.616 21:34:02 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.616 21:34:02 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.616 21:34:02 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.616 21:34:02 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.616 21:34:02 env -- scripts/common.sh@344 -- # case "$op" in 00:04:30.616 21:34:02 env -- scripts/common.sh@345 -- # : 1 00:04:30.616 21:34:02 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.616 21:34:02 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.616 21:34:02 env -- scripts/common.sh@365 -- # decimal 1 00:04:30.616 21:34:02 env -- scripts/common.sh@353 -- # local d=1 00:04:30.616 21:34:02 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.616 21:34:02 env -- scripts/common.sh@355 -- # echo 1 00:04:30.616 21:34:02 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.616 21:34:02 env -- scripts/common.sh@366 -- # decimal 2 00:04:30.616 21:34:02 env -- scripts/common.sh@353 -- # local d=2 00:04:30.616 21:34:02 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.617 21:34:02 env -- scripts/common.sh@355 -- # echo 2 00:04:30.617 21:34:02 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.617 21:34:02 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.617 21:34:02 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.617 21:34:02 env -- scripts/common.sh@368 -- # return 0 00:04:30.617 21:34:02 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.617 21:34:02 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:30.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.617 --rc genhtml_branch_coverage=1 00:04:30.617 --rc genhtml_function_coverage=1 00:04:30.617 --rc genhtml_legend=1 00:04:30.617 --rc geninfo_all_blocks=1 00:04:30.617 --rc geninfo_unexecuted_blocks=1 00:04:30.617 00:04:30.617 ' 00:04:30.617 21:34:02 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:30.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.617 --rc genhtml_branch_coverage=1 00:04:30.617 --rc genhtml_function_coverage=1 00:04:30.617 --rc genhtml_legend=1 00:04:30.617 --rc geninfo_all_blocks=1 00:04:30.617 --rc geninfo_unexecuted_blocks=1 00:04:30.617 00:04:30.617 ' 00:04:30.617 21:34:02 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:30.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.617 --rc genhtml_branch_coverage=1 00:04:30.617 --rc genhtml_function_coverage=1 00:04:30.617 --rc genhtml_legend=1 00:04:30.617 --rc geninfo_all_blocks=1 00:04:30.617 --rc geninfo_unexecuted_blocks=1 00:04:30.617 00:04:30.617 ' 00:04:30.617 21:34:02 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:30.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.617 --rc genhtml_branch_coverage=1 00:04:30.617 --rc genhtml_function_coverage=1 00:04:30.617 --rc genhtml_legend=1 00:04:30.617 --rc geninfo_all_blocks=1 00:04:30.617 --rc geninfo_unexecuted_blocks=1 00:04:30.617 00:04:30.617 ' 00:04:30.617 21:34:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.617 21:34:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.617 21:34:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.617 21:34:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.617 ************************************ 00:04:30.617 START TEST env_memory 00:04:30.617 ************************************ 00:04:30.617 21:34:02 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.617 00:04:30.617 00:04:30.617 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.617 http://cunit.sourceforge.net/ 00:04:30.617 00:04:30.617 00:04:30.617 Suite: memory 00:04:30.617 Test: alloc and free memory map ...[2024-11-29 21:34:02.832087] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.617 passed 00:04:30.617 Test: mem map translation ...[2024-11-29 21:34:02.850243] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.617 [2024-11-29 21:34:02.850259] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.617 [2024-11-29 21:34:02.850296] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.617 [2024-11-29 21:34:02.850305] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.875 passed 00:04:30.876 Test: mem map registration ...[2024-11-29 21:34:02.885208] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:30.876 [2024-11-29 21:34:02.885223] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:30.876 passed 00:04:30.876 Test: mem map adjacent registrations ...passed 00:04:30.876 00:04:30.876 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.876 suites 1 1 n/a 0 0 00:04:30.876 tests 4 4 4 0 0 00:04:30.876 asserts 152 152 152 0 n/a 00:04:30.876 00:04:30.876 Elapsed time = 0.131 seconds 00:04:30.876 00:04:30.876 real 0m0.144s 00:04:30.876 user 0m0.131s 00:04:30.876 sys 0m0.013s 00:04:30.876 21:34:02 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.876 21:34:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:30.876 ************************************ 00:04:30.876 END TEST env_memory 00:04:30.876 ************************************ 00:04:30.876 21:34:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.876 21:34:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.876 21:34:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.876 21:34:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.876 ************************************ 00:04:30.876 START TEST env_vtophys 00:04:30.876 ************************************ 00:04:30.876 21:34:03 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.876 EAL: lib.eal log level changed from notice to debug 00:04:30.876 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.876 EAL: Detected lcore 1 as core 1 on socket 0 00:04:30.876 EAL: Detected lcore 2 as core 2 on socket 0 00:04:30.876 EAL: Detected lcore 3 as core 3 on socket 0 00:04:30.876 EAL: Detected lcore 4 as core 4 on socket 0 00:04:30.876 EAL: Detected lcore 5 as core 5 on socket 0 00:04:30.876 EAL: Detected lcore 6 as core 6 on socket 0 00:04:30.876 EAL: Detected lcore 7 as core 8 on socket 0 00:04:30.876 EAL: Detected lcore 8 as core 9 on socket 0 00:04:30.876 EAL: Detected lcore 9 as core 10 on socket 0 00:04:30.876 EAL: Detected lcore 10 as core 11 on socket 0 00:04:30.876 
EAL: Detected lcore 11 as core 12 on socket 0 00:04:30.876 EAL: Detected lcore 12 as core 13 on socket 0 00:04:30.876 EAL: Detected lcore 13 as core 14 on socket 0 00:04:30.876 EAL: Detected lcore 14 as core 16 on socket 0 00:04:30.876 EAL: Detected lcore 15 as core 17 on socket 0 00:04:30.876 EAL: Detected lcore 16 as core 18 on socket 0 00:04:30.876 EAL: Detected lcore 17 as core 19 on socket 0 00:04:30.876 EAL: Detected lcore 18 as core 20 on socket 0 00:04:30.876 EAL: Detected lcore 19 as core 21 on socket 0 00:04:30.876 EAL: Detected lcore 20 as core 22 on socket 0 00:04:30.876 EAL: Detected lcore 21 as core 24 on socket 0 00:04:30.876 EAL: Detected lcore 22 as core 25 on socket 0 00:04:30.876 EAL: Detected lcore 23 as core 26 on socket 0 00:04:30.876 EAL: Detected lcore 24 as core 27 on socket 0 00:04:30.876 EAL: Detected lcore 25 as core 28 on socket 0 00:04:30.876 EAL: Detected lcore 26 as core 29 on socket 0 00:04:30.876 EAL: Detected lcore 27 as core 30 on socket 0 00:04:30.876 EAL: Detected lcore 28 as core 0 on socket 1 00:04:30.876 EAL: Detected lcore 29 as core 1 on socket 1 00:04:30.876 EAL: Detected lcore 30 as core 2 on socket 1 00:04:30.876 EAL: Detected lcore 31 as core 3 on socket 1 00:04:30.876 EAL: Detected lcore 32 as core 4 on socket 1 00:04:30.876 EAL: Detected lcore 33 as core 5 on socket 1 00:04:30.876 EAL: Detected lcore 34 as core 6 on socket 1 00:04:30.876 EAL: Detected lcore 35 as core 8 on socket 1 00:04:30.876 EAL: Detected lcore 36 as core 9 on socket 1 00:04:30.876 EAL: Detected lcore 37 as core 10 on socket 1 00:04:30.876 EAL: Detected lcore 38 as core 11 on socket 1 00:04:30.876 EAL: Detected lcore 39 as core 12 on socket 1 00:04:30.876 EAL: Detected lcore 40 as core 13 on socket 1 00:04:30.876 EAL: Detected lcore 41 as core 14 on socket 1 00:04:30.876 EAL: Detected lcore 42 as core 16 on socket 1 00:04:30.876 EAL: Detected lcore 43 as core 17 on socket 1 00:04:30.876 EAL: Detected lcore 44 as core 18 on socket 1 00:04:30.876 EAL: Detected lcore 45 as core 19 on socket 1 00:04:30.876 EAL: Detected lcore 46 as core 20 on socket 1 00:04:30.876 EAL: Detected lcore 47 as core 21 on socket 1 00:04:30.876 EAL: Detected lcore 48 as core 22 on socket 1 00:04:30.876 EAL: Detected lcore 49 as core 24 on socket 1 00:04:30.876 EAL: Detected lcore 50 as core 25 on socket 1 00:04:30.876 EAL: Detected lcore 51 as core 26 on socket 1 00:04:30.876 EAL: Detected lcore 52 as core 27 on socket 1 00:04:30.876 EAL: Detected lcore 53 as core 28 on socket 1 00:04:30.876 EAL: Detected lcore 54 as core 29 on socket 1 00:04:30.876 EAL: Detected lcore 55 as core 30 on socket 1 00:04:30.876 EAL: Detected lcore 56 as core 0 on socket 0 00:04:30.876 EAL: Detected lcore 57 as core 1 on socket 0 00:04:30.876 EAL: Detected lcore 58 as core 2 on socket 0 00:04:30.876 EAL: Detected lcore 59 as core 3 on socket 0 00:04:30.876 EAL: Detected lcore 60 as core 4 on socket 0 00:04:30.876 EAL: Detected lcore 61 as core 5 on socket 0 00:04:30.876 EAL: Detected lcore 62 as core 6 on socket 0 00:04:30.876 EAL: Detected lcore 63 as core 8 on socket 0 00:04:30.876 EAL: Detected lcore 64 as core 9 on socket 0 00:04:30.876 EAL: Detected lcore 65 as core 10 on socket 0 00:04:30.876 EAL: Detected lcore 66 as core 11 on socket 0 00:04:30.876 EAL: Detected lcore 67 as core 12 on socket 0 00:04:30.876 EAL: Detected lcore 68 as core 13 on socket 0 00:04:30.876 EAL: Detected lcore 69 as core 14 on socket 0 00:04:30.876 EAL: Detected lcore 70 as core 16 on socket 0 00:04:30.876 EAL: Detected lcore 71 as core 
17 on socket 0 00:04:30.876 EAL: Detected lcore 72 as core 18 on socket 0 00:04:30.876 EAL: Detected lcore 73 as core 19 on socket 0 00:04:30.876 EAL: Detected lcore 74 as core 20 on socket 0 00:04:30.876 EAL: Detected lcore 75 as core 21 on socket 0 00:04:30.876 EAL: Detected lcore 76 as core 22 on socket 0 00:04:30.876 EAL: Detected lcore 77 as core 24 on socket 0 00:04:30.876 EAL: Detected lcore 78 as core 25 on socket 0 00:04:30.876 EAL: Detected lcore 79 as core 26 on socket 0 00:04:30.876 EAL: Detected lcore 80 as core 27 on socket 0 00:04:30.876 EAL: Detected lcore 81 as core 28 on socket 0 00:04:30.876 EAL: Detected lcore 82 as core 29 on socket 0 00:04:30.876 EAL: Detected lcore 83 as core 30 on socket 0 00:04:30.876 EAL: Detected lcore 84 as core 0 on socket 1 00:04:30.876 EAL: Detected lcore 85 as core 1 on socket 1 00:04:30.876 EAL: Detected lcore 86 as core 2 on socket 1 00:04:30.876 EAL: Detected lcore 87 as core 3 on socket 1 00:04:30.876 EAL: Detected lcore 88 as core 4 on socket 1 00:04:30.876 EAL: Detected lcore 89 as core 5 on socket 1 00:04:30.876 EAL: Detected lcore 90 as core 6 on socket 1 00:04:30.876 EAL: Detected lcore 91 as core 8 on socket 1 00:04:30.876 EAL: Detected lcore 92 as core 9 on socket 1 00:04:30.876 EAL: Detected lcore 93 as core 10 on socket 1 00:04:30.876 EAL: Detected lcore 94 as core 11 on socket 1 00:04:30.876 EAL: Detected lcore 95 as core 12 on socket 1 00:04:30.877 EAL: Detected lcore 96 as core 13 on socket 1 00:04:30.877 EAL: Detected lcore 97 as core 14 on socket 1 00:04:30.877 EAL: Detected lcore 98 as core 16 on socket 1 00:04:30.877 EAL: Detected lcore 99 as core 17 on socket 1 00:04:30.877 EAL: Detected lcore 100 as core 18 on socket 1 00:04:30.877 EAL: Detected lcore 101 as core 19 on socket 1 00:04:30.877 EAL: Detected lcore 102 as core 20 on socket 1 00:04:30.877 EAL: Detected lcore 103 as core 21 on socket 1 00:04:30.877 EAL: Detected lcore 104 as core 22 on socket 1 00:04:30.877 EAL: Detected lcore 105 as core 24 on socket 1 00:04:30.877 EAL: Detected lcore 106 as core 25 on socket 1 00:04:30.877 EAL: Detected lcore 107 as core 26 on socket 1 00:04:30.877 EAL: Detected lcore 108 as core 27 on socket 1 00:04:30.877 EAL: Detected lcore 109 as core 28 on socket 1 00:04:30.877 EAL: Detected lcore 110 as core 29 on socket 1 00:04:30.877 EAL: Detected lcore 111 as core 30 on socket 1 00:04:30.877 EAL: Maximum logical cores by configuration: 128 00:04:30.877 EAL: Detected CPU lcores: 112 00:04:30.877 EAL: Detected NUMA nodes: 2 00:04:30.877 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:30.877 EAL: Detected shared linkage of DPDK 00:04:30.877 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:30.877 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:30.877 EAL: Registered [vdev] bus. 
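The env_memory suite above drives the spdk_mem_map API in lib/env_dpdk/memory.c: the *ERROR* lines are its negative tests (unaligned or out-of-range vaddr/len), and the passing path installs and looks up translations for 2 MiB-aligned regions. A minimal sketch of the happy path, assuming an initialized SPDK environment; the addresses and the 0x10000000 translation are arbitrary illustrative values:

    #include "spdk/stdinc.h"
    #include "spdk/env.h"

    /* Called for every region registered/unregistered while the map exists. */
    static int
    map_notify(void *cb_ctx, struct spdk_mem_map *map,
               enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
            return 0; /* accept all regions */
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = map_notify };

    static void
    mem_map_sketch(void)
    {
            /* 0 is the default translation returned for unmapped addresses. */
            struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
            uint64_t len = 0x200000;

            /* Both vaddr and len must be multiples of 2 MiB; the
             * "invalid ... parameters, vaddr=... len=1234" errors above are
             * exactly this check being exercised. */
            spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0x10000000);
            (void)spdk_mem_map_translate(map, 0x200000, &len);
            spdk_mem_map_free(&map);
    }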
00:04:30.877 EAL: bus.vdev log level changed from disabled to notice 00:04:30.877 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:30.877 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:30.877 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:30.877 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:30.877 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:30.877 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:30.877 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:30.877 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:30.877 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.877 EAL: No shared files mode enabled, IPC is disabled 00:04:30.877 EAL: Bus pci wants IOVA as 'DC' 00:04:30.877 EAL: Bus vdev wants IOVA as 'DC' 00:04:30.877 EAL: Buses did not request a specific IOVA mode. 00:04:30.877 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:30.877 EAL: Selected IOVA mode 'VA' 00:04:30.877 EAL: Probing VFIO support... 00:04:30.877 EAL: IOMMU type 1 (Type 1) is supported 00:04:30.877 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:30.877 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:30.877 EAL: VFIO support initialized 00:04:30.877 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.877 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.877 EAL: Setting up physically contiguous memory... 
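Everything from the lcore/NUMA detection above through the VFIO probing and the memseg-list reservations that follow is performed by the env layer's single init call. A minimal sketch of how a standalone tool would trigger the same bring-up; the process name and core mask are illustrative:

    #include "spdk/env.h"

    static int
    env_init_sketch(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "env_sketch";  /* illustrative */
            opts.core_mask = "0x1";

            /* Runs the EAL initialization logged here: lcore detection,
             * IOVA mode selection, VFIO probing, memseg reservation. */
            return spdk_env_init(&opts) < 0 ? -1 : 0;
    }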
00:04:30.877 EAL: Setting maximum number of open files to 524288 00:04:30.877 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.877 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:30.877 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.877 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.877 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.877 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.877 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.877 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.877 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.877 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.877 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.877 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.877 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.877 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.877 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.877 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.877 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.877 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.877 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.877 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.877 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.877 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.877 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.877 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.877 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.877 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.877 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.877 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:30.877 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.877 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:30.877 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.877 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.877 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:30.877 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:30.877 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.877 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:30.877 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.877 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.877 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:30.877 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:30.877 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.877 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:30.877 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.877 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.877 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:30.877 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:30.877 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.877 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:30.877 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.877 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.877 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:30.877 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:30.877 EAL: Hugepages will be freed exactly as allocated. 00:04:30.877 EAL: No shared files mode enabled, IPC is disabled 00:04:30.877 EAL: No shared files mode enabled, IPC is disabled 00:04:30.877 EAL: TSC frequency is ~2500000 KHz 00:04:30.877 EAL: Main lcore 0 is ready (tid=7f197b23fa00;cpuset=[0]) 00:04:30.877 EAL: Trying to obtain current memory policy. 00:04:30.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.877 EAL: Restoring previous memory policy: 0 00:04:30.877 EAL: request: mp_malloc_sync 00:04:30.877 EAL: No shared files mode enabled, IPC is disabled 00:04:30.877 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.877 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:04:30.877 EAL: probe driver: 8086:37d2 net_i40e 00:04:30.877 EAL: Not managed by a supported kernel driver, skipped 00:04:30.877 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:04:30.877 EAL: probe driver: 8086:37d2 net_i40e 00:04:30.877 EAL: Not managed by a supported kernel driver, skipped 00:04:30.877 EAL: No shared files mode enabled, IPC is disabled 00:04:30.877 EAL: No shared files mode enabled, IPC is disabled 00:04:30.877 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.877 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.877 00:04:30.877 00:04:30.877 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.877 http://cunit.sourceforge.net/ 00:04:30.877 00:04:30.877 00:04:30.877 Suite: components_suite 00:04:30.878 Test: vtophys_malloc_test ...passed 00:04:30.878 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.878 EAL: Restoring previous memory policy: 4 00:04:30.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.878 EAL: request: mp_malloc_sync 00:04:30.878 EAL: No shared files mode enabled, IPC is disabled 00:04:30.878 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.878 EAL: request: mp_malloc_sync 00:04:30.878 EAL: No shared files mode enabled, IPC is disabled 00:04:30.878 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.878 EAL: Trying to obtain current memory policy. 00:04:30.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.878 EAL: Restoring previous memory policy: 4 00:04:30.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.878 EAL: request: mp_malloc_sync 00:04:30.878 EAL: No shared files mode enabled, IPC is disabled 00:04:30.878 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.878 EAL: request: mp_malloc_sync 00:04:30.878 EAL: No shared files mode enabled, IPC is disabled 00:04:30.878 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.878 EAL: Trying to obtain current memory policy. 00:04:30.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.878 EAL: Restoring previous memory policy: 4 00:04:30.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.878 EAL: request: mp_malloc_sync 00:04:30.878 EAL: No shared files mode enabled, IPC is disabled 00:04:30.878 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.878 EAL: request: mp_malloc_sync 00:04:30.878 EAL: No shared files mode enabled, IPC is disabled 00:04:30.878 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.878 EAL: Trying to obtain current memory policy. 
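Each memseg list above pairs a 0x61000-byte header with a 0x400000000-byte address-space reservation, i.e. 8192 segments x 2 MiB pages = 16 GiB per list, four lists per NUMA node. The vtophys_spdk_malloc_test cycles allocate DMA-safe buffers and translate them; a minimal equivalent of one cycle, assuming the env is initialized (the 2 MiB buffer size is illustrative):

    #include "spdk/env.h"

    static int
    vtophys_sketch(void)
    {
            uint64_t len = 0x200000;
            /* Hugepage-backed buffer; allocating it is what produces the
             * "Heap on socket 0 was expanded by ..." events in this log. */
            void *buf = spdk_dma_malloc(0x200000, 0x200000, NULL);

            if (buf == NULL) {
                    return -1;
            }

            /* Virtual-to-physical (or IOVA) translation for the buffer. */
            uint64_t addr = spdk_vtophys(buf, &len);

            spdk_dma_free(buf);
            return addr == SPDK_VTOPHYS_ERROR ? -1 : 0;
    }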
00:04:30.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.878 EAL: Restoring previous memory policy: 4 00:04:30.878 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.878 EAL: request: mp_malloc_sync 00:04:30.878 EAL: No shared files mode enabled, IPC is disabled 00:04:30.878 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.137 EAL: Trying to obtain current memory policy. 00:04:31.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.137 EAL: Restoring previous memory policy: 4 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.137 EAL: Trying to obtain current memory policy. 00:04:31.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.137 EAL: Restoring previous memory policy: 4 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.137 EAL: Trying to obtain current memory policy. 00:04:31.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.137 EAL: Restoring previous memory policy: 4 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.137 EAL: Trying to obtain current memory policy. 00:04:31.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.137 EAL: Restoring previous memory policy: 4 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.137 EAL: request: mp_malloc_sync 00:04:31.137 EAL: No shared files mode enabled, IPC is disabled 00:04:31.137 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.137 EAL: Trying to obtain current memory policy. 
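The expansion sizes stepped through by this suite (4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB) follow 2^k + 2 MB for k = 1..10, so each cycle roughly doubles the allocation up to about 1 GB, and every "expanded by" event is paired with a matching "shrunk by" event once the buffer is freed.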
00:04:31.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.396 EAL: Restoring previous memory policy: 4 00:04:31.396 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.396 EAL: request: mp_malloc_sync 00:04:31.396 EAL: No shared files mode enabled, IPC is disabled 00:04:31.396 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.396 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.396 EAL: request: mp_malloc_sync 00:04:31.396 EAL: No shared files mode enabled, IPC is disabled 00:04:31.396 EAL: Heap on socket 0 was shrunk by 514MB 00:04:31.396 EAL: Trying to obtain current memory policy. 00:04:31.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.655 EAL: Restoring previous memory policy: 4 00:04:31.655 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.655 EAL: request: mp_malloc_sync 00:04:31.655 EAL: No shared files mode enabled, IPC is disabled 00:04:31.655 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.915 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.915 EAL: request: mp_malloc_sync 00:04:31.915 EAL: No shared files mode enabled, IPC is disabled 00:04:31.915 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.915 passed 00:04:31.915 00:04:31.915 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.915 suites 1 1 n/a 0 0 00:04:31.915 tests 2 2 2 0 0 00:04:31.915 asserts 497 497 497 0 n/a 00:04:31.915 00:04:31.915 Elapsed time = 0.964 seconds 00:04:31.915 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.915 EAL: request: mp_malloc_sync 00:04:31.915 EAL: No shared files mode enabled, IPC is disabled 00:04:31.915 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.915 EAL: No shared files mode enabled, IPC is disabled 00:04:31.915 EAL: No shared files mode enabled, IPC is disabled 00:04:31.915 EAL: No shared files mode enabled, IPC is disabled 00:04:31.915 00:04:31.915 real 0m1.104s 00:04:31.915 user 0m0.634s 00:04:31.915 sys 0m0.437s 00:04:31.915 21:34:04 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.915 21:34:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.915 ************************************ 00:04:31.915 END TEST env_vtophys 00:04:31.915 ************************************ 00:04:31.915 21:34:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.915 21:34:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.915 21:34:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.915 21:34:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.174 ************************************ 00:04:32.174 START TEST env_pci 00:04:32.174 ************************************ 00:04:32.174 21:34:04 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:32.174 00:04:32.174 00:04:32.174 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.174 http://cunit.sourceforge.net/ 00:04:32.174 00:04:32.174 00:04:32.174 Suite: pci 00:04:32.174 Test: pci_hook ...[2024-11-29 21:34:04.201726] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2837368 has claimed it 00:04:32.174 EAL: Cannot find device (10000:00:01.0) 00:04:32.174 EAL: Failed to attach device on primary process 00:04:32.174 passed 00:04:32.174 00:04:32.174 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.174 suites 1 
1 n/a 0 0 00:04:32.174 tests 1 1 1 0 0 00:04:32.174 asserts 25 25 25 0 n/a 00:04:32.174 00:04:32.174 Elapsed time = 0.037 seconds 00:04:32.174 00:04:32.174 real 0m0.057s 00:04:32.174 user 0m0.013s 00:04:32.174 sys 0m0.043s 00:04:32.174 21:34:04 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.174 21:34:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:32.174 ************************************ 00:04:32.174 END TEST env_pci 00:04:32.174 ************************************ 00:04:32.174 21:34:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:32.174 21:34:04 env -- env/env.sh@15 -- # uname 00:04:32.174 21:34:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:32.174 21:34:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:32.174 21:34:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.174 21:34:04 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:32.174 21:34:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.174 21:34:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.174 ************************************ 00:04:32.174 START TEST env_dpdk_post_init 00:04:32.174 ************************************ 00:04:32.174 21:34:04 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.174 EAL: Detected CPU lcores: 112 00:04:32.174 EAL: Detected NUMA nodes: 2 00:04:32.174 EAL: Detected shared linkage of DPDK 00:04:32.174 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.174 EAL: Selected IOVA mode 'VA' 00:04:32.174 EAL: VFIO support initialized 00:04:32.174 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.434 EAL: Using IOMMU type 1 (Type 1) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:32.434 EAL: Ignore mapping IO port 
bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:32.434 EAL: Ignore mapping IO port bar(1) 00:04:32.434 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:33.384 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:37.575 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:37.575 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:37.576 Starting DPDK initialization... 00:04:37.576 Starting SPDK post initialization... 00:04:37.576 SPDK NVMe probe 00:04:37.576 Attaching to 0000:d8:00.0 00:04:37.576 Attached to 0000:d8:00.0 00:04:37.576 Cleaning up... 00:04:37.576 00:04:37.576 real 0m5.330s 00:04:37.576 user 0m3.956s 00:04:37.576 sys 0m0.430s 00:04:37.576 21:34:09 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.576 21:34:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.576 ************************************ 00:04:37.576 END TEST env_dpdk_post_init 00:04:37.576 ************************************ 00:04:37.576 21:34:09 env -- env/env.sh@26 -- # uname 00:04:37.576 21:34:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:37.576 21:34:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.576 21:34:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.576 21:34:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.576 21:34:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.576 ************************************ 00:04:37.576 START TEST env_mem_callbacks 00:04:37.576 ************************************ 00:04:37.576 21:34:09 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.576 EAL: Detected CPU lcores: 112 00:04:37.576 EAL: Detected NUMA nodes: 2 00:04:37.576 EAL: Detected shared linkage of DPDK 00:04:37.576 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.576 EAL: Selected IOVA mode 'VA' 00:04:37.576 EAL: VFIO support initialized 00:04:37.576 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.576 00:04:37.576 00:04:37.576 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.576 http://cunit.sourceforge.net/ 00:04:37.576 00:04:37.576 00:04:37.576 Suite: memory 00:04:37.576 Test: test ... 
00:04:37.576 register 0x200000200000 2097152 00:04:37.576 malloc 3145728 00:04:37.576 register 0x200000400000 4194304 00:04:37.576 buf 0x200000500000 len 3145728 PASSED 00:04:37.576 malloc 64 00:04:37.576 buf 0x2000004fff40 len 64 PASSED 00:04:37.576 malloc 4194304 00:04:37.576 register 0x200000800000 6291456 00:04:37.576 buf 0x200000a00000 len 4194304 PASSED 00:04:37.576 free 0x200000500000 3145728 00:04:37.576 free 0x2000004fff40 64 00:04:37.576 unregister 0x200000400000 4194304 PASSED 00:04:37.576 free 0x200000a00000 4194304 00:04:37.576 unregister 0x200000800000 6291456 PASSED 00:04:37.576 malloc 8388608 00:04:37.576 register 0x200000400000 10485760 00:04:37.576 buf 0x200000600000 len 8388608 PASSED 00:04:37.576 free 0x200000600000 8388608 00:04:37.576 unregister 0x200000400000 10485760 PASSED 00:04:37.576 passed 00:04:37.576 00:04:37.576 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.576 suites 1 1 n/a 0 0 00:04:37.576 tests 1 1 1 0 0 00:04:37.576 asserts 15 15 15 0 n/a 00:04:37.576 00:04:37.576 Elapsed time = 0.006 seconds 00:04:37.576 00:04:37.576 real 0m0.066s 00:04:37.576 user 0m0.020s 00:04:37.576 sys 0m0.046s 00:04:37.576 21:34:09 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.576 21:34:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.576 ************************************ 00:04:37.576 END TEST env_mem_callbacks 00:04:37.576 ************************************ 00:04:37.835 00:04:37.835 real 0m7.278s 00:04:37.835 user 0m5.003s 00:04:37.835 sys 0m1.340s 00:04:37.835 21:34:09 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.835 21:34:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.835 ************************************ 00:04:37.835 END TEST env 00:04:37.835 ************************************ 00:04:37.835 21:34:09 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:37.835 21:34:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.835 21:34:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.835 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:04:37.836 ************************************ 00:04:37.836 START TEST rpc 00:04:37.836 ************************************ 00:04:37.836 21:34:09 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:37.836 * Looking for test storage... 
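The register/unregister trace above comes from spdk_mem_register()/spdk_mem_unregister() on 2 MiB-aligned regions, with each call fanned out to every active mem map's notify callback. A minimal sketch; posix_memalign is used here only to satisfy the 2 MiB alignment requirement, and the actual test may obtain its buffers differently:

    #include <stdlib.h>
    #include "spdk/env.h"

    static int
    mem_callbacks_sketch(void)
    {
            void *buf = NULL;

            if (posix_memalign(&buf, 0x200000, 0x200000) != 0) {
                    return -1;
            }
            /* Fires the "register <vaddr> 2097152" notification in every
             * registered mem map... */
            spdk_mem_register(buf, 0x200000);
            /* ...and the matching "unregister" notification. */
            spdk_mem_unregister(buf, 0x200000);
            free(buf);
            return 0;
    }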
00:04:37.836 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:37.836 21:34:10 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:37.836 21:34:10 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:37.836 21:34:10 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.095 21:34:10 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.095 21:34:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.095 21:34:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.095 21:34:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.095 21:34:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.095 21:34:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.095 21:34:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.095 21:34:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.095 21:34:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.095 21:34:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.095 21:34:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.095 21:34:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.095 21:34:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.095 21:34:10 rpc -- scripts/common.sh@345 -- # : 1 00:04:38.095 21:34:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.095 21:34:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.095 21:34:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.095 21:34:10 rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.095 21:34:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.095 21:34:10 rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.095 21:34:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.095 21:34:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.095 21:34:10 rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.095 21:34:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.095 21:34:10 rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.095 21:34:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.095 21:34:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.095 21:34:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.095 21:34:10 rpc -- scripts/common.sh@368 -- # return 0 00:04:38.095 21:34:10 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.095 21:34:10 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.095 --rc genhtml_branch_coverage=1 00:04:38.096 --rc genhtml_function_coverage=1 00:04:38.096 --rc genhtml_legend=1 00:04:38.096 --rc geninfo_all_blocks=1 00:04:38.096 --rc geninfo_unexecuted_blocks=1 00:04:38.096 00:04:38.096 ' 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.096 --rc genhtml_branch_coverage=1 00:04:38.096 --rc genhtml_function_coverage=1 00:04:38.096 --rc genhtml_legend=1 00:04:38.096 --rc geninfo_all_blocks=1 00:04:38.096 --rc geninfo_unexecuted_blocks=1 00:04:38.096 00:04:38.096 ' 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.096 --rc genhtml_branch_coverage=1 00:04:38.096 --rc genhtml_function_coverage=1 00:04:38.096 
--rc genhtml_legend=1 00:04:38.096 --rc geninfo_all_blocks=1 00:04:38.096 --rc geninfo_unexecuted_blocks=1 00:04:38.096 00:04:38.096 ' 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.096 --rc genhtml_branch_coverage=1 00:04:38.096 --rc genhtml_function_coverage=1 00:04:38.096 --rc genhtml_legend=1 00:04:38.096 --rc geninfo_all_blocks=1 00:04:38.096 --rc geninfo_unexecuted_blocks=1 00:04:38.096 00:04:38.096 ' 00:04:38.096 21:34:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2838488 00:04:38.096 21:34:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.096 21:34:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:38.096 21:34:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2838488 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@831 -- # '[' -z 2838488 ']' 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.096 21:34:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.096 [2024-11-29 21:34:10.194826] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:04:38.096 [2024-11-29 21:34:10.194885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838488 ] 00:04:38.096 [2024-11-29 21:34:10.265246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.096 [2024-11-29 21:34:10.304473] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.096 [2024-11-29 21:34:10.304522] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2838488' to capture a snapshot of events at runtime. 00:04:38.096 [2024-11-29 21:34:10.304532] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.096 [2024-11-29 21:34:10.304540] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.096 [2024-11-29 21:34:10.304548] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2838488 for offline analysis/debug. 
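The spdk_tgt startup above (the app.c notices, the trace-file setup, then waitforlisten polling until the RPC socket /var/tmp/spdk.sock accepts connections) is the stock SPDK application bootstrap. A minimal sketch of the same pattern; the application name and start callback are illustrative:

    #include "spdk/event.h"

    static void
    tgt_started(void *ctx)
    {
            /* Reached once the reactors are running and the default RPC
             * listener on /var/tmp/spdk.sock is up, which is the state
             * that waitforlisten polls for. */
    }

    int
    main(int argc, char **argv)
    {
            struct spdk_app_opts opts = {};
            int rc;

            spdk_app_opts_init(&opts, sizeof(opts));
            opts.name = "tgt_sketch";  /* illustrative */

            /* Blocks until spdk_app_stop() is called. */
            rc = spdk_app_start(&opts, tgt_started, NULL);
            spdk_app_fini();
            return rc;
    }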
00:04:38.096 [2024-11-29 21:34:10.304578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.355 21:34:10 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.355 21:34:10 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:38.355 21:34:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:38.355 21:34:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:38.355 21:34:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:38.355 21:34:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:38.355 21:34:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.355 21:34:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.355 21:34:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.355 ************************************ 00:04:38.355 START TEST rpc_integrity 00:04:38.355 ************************************ 00:04:38.355 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:38.355 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.355 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.355 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.355 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.355 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.355 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.355 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.355 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.355 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.355 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.355 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.355 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:38.613 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.613 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.613 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.613 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.613 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.613 { 00:04:38.613 "name": "Malloc0", 00:04:38.613 "aliases": [ 00:04:38.613 "f8596ec0-fd0e-4cb4-adbc-34259bb5af21" 00:04:38.613 ], 00:04:38.613 "product_name": "Malloc disk", 00:04:38.613 "block_size": 512, 00:04:38.613 "num_blocks": 16384, 00:04:38.613 "uuid": "f8596ec0-fd0e-4cb4-adbc-34259bb5af21", 00:04:38.613 "assigned_rate_limits": { 00:04:38.613 "rw_ios_per_sec": 0, 00:04:38.613 "rw_mbytes_per_sec": 0, 00:04:38.613 "r_mbytes_per_sec": 0, 00:04:38.613 "w_mbytes_per_sec": 0 00:04:38.613 }, 00:04:38.613 "claimed": false, 
00:04:38.613 "zoned": false, 00:04:38.613 "supported_io_types": { 00:04:38.613 "read": true, 00:04:38.614 "write": true, 00:04:38.614 "unmap": true, 00:04:38.614 "flush": true, 00:04:38.614 "reset": true, 00:04:38.614 "nvme_admin": false, 00:04:38.614 "nvme_io": false, 00:04:38.614 "nvme_io_md": false, 00:04:38.614 "write_zeroes": true, 00:04:38.614 "zcopy": true, 00:04:38.614 "get_zone_info": false, 00:04:38.614 "zone_management": false, 00:04:38.614 "zone_append": false, 00:04:38.614 "compare": false, 00:04:38.614 "compare_and_write": false, 00:04:38.614 "abort": true, 00:04:38.614 "seek_hole": false, 00:04:38.614 "seek_data": false, 00:04:38.614 "copy": true, 00:04:38.614 "nvme_iov_md": false 00:04:38.614 }, 00:04:38.614 "memory_domains": [ 00:04:38.614 { 00:04:38.614 "dma_device_id": "system", 00:04:38.614 "dma_device_type": 1 00:04:38.614 }, 00:04:38.614 { 00:04:38.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.614 "dma_device_type": 2 00:04:38.614 } 00:04:38.614 ], 00:04:38.614 "driver_specific": {} 00:04:38.614 } 00:04:38.614 ]' 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 [2024-11-29 21:34:10.677021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:38.614 [2024-11-29 21:34:10.677053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.614 [2024-11-29 21:34:10.677066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2597640 00:04:38.614 [2024-11-29 21:34:10.677075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.614 [2024-11-29 21:34:10.678151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.614 [2024-11-29 21:34:10.678175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.614 Passthru0 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.614 { 00:04:38.614 "name": "Malloc0", 00:04:38.614 "aliases": [ 00:04:38.614 "f8596ec0-fd0e-4cb4-adbc-34259bb5af21" 00:04:38.614 ], 00:04:38.614 "product_name": "Malloc disk", 00:04:38.614 "block_size": 512, 00:04:38.614 "num_blocks": 16384, 00:04:38.614 "uuid": "f8596ec0-fd0e-4cb4-adbc-34259bb5af21", 00:04:38.614 "assigned_rate_limits": { 00:04:38.614 "rw_ios_per_sec": 0, 00:04:38.614 "rw_mbytes_per_sec": 0, 00:04:38.614 "r_mbytes_per_sec": 0, 00:04:38.614 "w_mbytes_per_sec": 0 00:04:38.614 }, 00:04:38.614 "claimed": true, 00:04:38.614 "claim_type": "exclusive_write", 00:04:38.614 "zoned": false, 00:04:38.614 "supported_io_types": { 00:04:38.614 "read": true, 00:04:38.614 "write": true, 00:04:38.614 "unmap": true, 00:04:38.614 "flush": true, 00:04:38.614 "reset": true, 
00:04:38.614 "nvme_admin": false, 00:04:38.614 "nvme_io": false, 00:04:38.614 "nvme_io_md": false, 00:04:38.614 "write_zeroes": true, 00:04:38.614 "zcopy": true, 00:04:38.614 "get_zone_info": false, 00:04:38.614 "zone_management": false, 00:04:38.614 "zone_append": false, 00:04:38.614 "compare": false, 00:04:38.614 "compare_and_write": false, 00:04:38.614 "abort": true, 00:04:38.614 "seek_hole": false, 00:04:38.614 "seek_data": false, 00:04:38.614 "copy": true, 00:04:38.614 "nvme_iov_md": false 00:04:38.614 }, 00:04:38.614 "memory_domains": [ 00:04:38.614 { 00:04:38.614 "dma_device_id": "system", 00:04:38.614 "dma_device_type": 1 00:04:38.614 }, 00:04:38.614 { 00:04:38.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.614 "dma_device_type": 2 00:04:38.614 } 00:04:38.614 ], 00:04:38.614 "driver_specific": {} 00:04:38.614 }, 00:04:38.614 { 00:04:38.614 "name": "Passthru0", 00:04:38.614 "aliases": [ 00:04:38.614 "2137b356-d3c4-51b9-994a-41581514b422" 00:04:38.614 ], 00:04:38.614 "product_name": "passthru", 00:04:38.614 "block_size": 512, 00:04:38.614 "num_blocks": 16384, 00:04:38.614 "uuid": "2137b356-d3c4-51b9-994a-41581514b422", 00:04:38.614 "assigned_rate_limits": { 00:04:38.614 "rw_ios_per_sec": 0, 00:04:38.614 "rw_mbytes_per_sec": 0, 00:04:38.614 "r_mbytes_per_sec": 0, 00:04:38.614 "w_mbytes_per_sec": 0 00:04:38.614 }, 00:04:38.614 "claimed": false, 00:04:38.614 "zoned": false, 00:04:38.614 "supported_io_types": { 00:04:38.614 "read": true, 00:04:38.614 "write": true, 00:04:38.614 "unmap": true, 00:04:38.614 "flush": true, 00:04:38.614 "reset": true, 00:04:38.614 "nvme_admin": false, 00:04:38.614 "nvme_io": false, 00:04:38.614 "nvme_io_md": false, 00:04:38.614 "write_zeroes": true, 00:04:38.614 "zcopy": true, 00:04:38.614 "get_zone_info": false, 00:04:38.614 "zone_management": false, 00:04:38.614 "zone_append": false, 00:04:38.614 "compare": false, 00:04:38.614 "compare_and_write": false, 00:04:38.614 "abort": true, 00:04:38.614 "seek_hole": false, 00:04:38.614 "seek_data": false, 00:04:38.614 "copy": true, 00:04:38.614 "nvme_iov_md": false 00:04:38.614 }, 00:04:38.614 "memory_domains": [ 00:04:38.614 { 00:04:38.614 "dma_device_id": "system", 00:04:38.614 "dma_device_type": 1 00:04:38.614 }, 00:04:38.614 { 00:04:38.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.614 "dma_device_type": 2 00:04:38.614 } 00:04:38.614 ], 00:04:38.614 "driver_specific": { 00:04:38.614 "passthru": { 00:04:38.614 "name": "Passthru0", 00:04:38.614 "base_bdev_name": "Malloc0" 00:04:38.614 } 00:04:38.614 } 00:04:38.614 } 00:04:38.614 ]' 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.614 
21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.614 21:34:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.614 00:04:38.614 real 0m0.288s 00:04:38.614 user 0m0.172s 00:04:38.614 sys 0m0.057s 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.614 21:34:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.614 ************************************ 00:04:38.614 END TEST rpc_integrity 00:04:38.614 ************************************ 00:04:38.873 21:34:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:38.873 21:34:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.873 21:34:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.873 21:34:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.873 ************************************ 00:04:38.873 START TEST rpc_plugins 00:04:38.873 ************************************ 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:38.873 21:34:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.873 21:34:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:38.873 21:34:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.873 21:34:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:38.873 { 00:04:38.873 "name": "Malloc1", 00:04:38.873 "aliases": [ 00:04:38.873 "23584f5b-e697-4058-b0ae-2642aecf72b0" 00:04:38.873 ], 00:04:38.873 "product_name": "Malloc disk", 00:04:38.873 "block_size": 4096, 00:04:38.873 "num_blocks": 256, 00:04:38.873 "uuid": "23584f5b-e697-4058-b0ae-2642aecf72b0", 00:04:38.873 "assigned_rate_limits": { 00:04:38.873 "rw_ios_per_sec": 0, 00:04:38.873 "rw_mbytes_per_sec": 0, 00:04:38.873 "r_mbytes_per_sec": 0, 00:04:38.873 "w_mbytes_per_sec": 0 00:04:38.873 }, 00:04:38.873 "claimed": false, 00:04:38.873 "zoned": false, 00:04:38.873 "supported_io_types": { 00:04:38.873 "read": true, 00:04:38.873 "write": true, 00:04:38.873 "unmap": true, 00:04:38.873 "flush": true, 00:04:38.873 "reset": true, 00:04:38.873 "nvme_admin": false, 00:04:38.873 "nvme_io": false, 00:04:38.873 "nvme_io_md": false, 00:04:38.873 "write_zeroes": true, 00:04:38.873 "zcopy": true, 00:04:38.873 "get_zone_info": false, 00:04:38.873 "zone_management": false, 00:04:38.873 "zone_append": false, 00:04:38.873 "compare": false, 00:04:38.873 "compare_and_write": false, 00:04:38.873 "abort": true, 00:04:38.873 "seek_hole": false, 00:04:38.873 "seek_data": false, 00:04:38.873 "copy": true, 00:04:38.873 "nvme_iov_md": false 00:04:38.873 }, 00:04:38.873 
"memory_domains": [ 00:04:38.873 { 00:04:38.873 "dma_device_id": "system", 00:04:38.873 "dma_device_type": 1 00:04:38.873 }, 00:04:38.873 { 00:04:38.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.873 "dma_device_type": 2 00:04:38.873 } 00:04:38.873 ], 00:04:38.873 "driver_specific": {} 00:04:38.873 } 00:04:38.873 ]' 00:04:38.873 21:34:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:38.873 21:34:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:38.873 21:34:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.873 21:34:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.873 21:34:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.873 21:34:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.873 21:34:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:38.873 21:34:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:38.873 21:34:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.873 00:04:38.873 real 0m0.138s 00:04:38.873 user 0m0.089s 00:04:38.873 sys 0m0.024s 00:04:38.873 21:34:11 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.873 21:34:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.873 ************************************ 00:04:38.873 END TEST rpc_plugins 00:04:38.873 ************************************ 00:04:38.873 21:34:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.873 21:34:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.873 21:34:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.873 21:34:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.132 ************************************ 00:04:39.132 START TEST rpc_trace_cmd_test 00:04:39.132 ************************************ 00:04:39.132 21:34:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:39.132 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:39.132 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:39.132 21:34:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.132 21:34:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.132 21:34:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.132 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:39.132 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2838488", 00:04:39.132 "tpoint_group_mask": "0x8", 00:04:39.132 "iscsi_conn": { 00:04:39.132 "mask": "0x2", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 00:04:39.132 "scsi": { 00:04:39.132 "mask": "0x4", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 00:04:39.132 "bdev": { 00:04:39.132 "mask": "0x8", 00:04:39.132 "tpoint_mask": "0xffffffffffffffff" 00:04:39.132 }, 00:04:39.132 "nvmf_rdma": { 00:04:39.132 "mask": "0x10", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 00:04:39.132 "nvmf_tcp": { 00:04:39.132 "mask": "0x20", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 
00:04:39.132 "ftl": { 00:04:39.132 "mask": "0x40", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 00:04:39.132 "blobfs": { 00:04:39.132 "mask": "0x80", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 00:04:39.132 "dsa": { 00:04:39.132 "mask": "0x200", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 00:04:39.132 "thread": { 00:04:39.132 "mask": "0x400", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 00:04:39.132 "nvme_pcie": { 00:04:39.132 "mask": "0x800", 00:04:39.132 "tpoint_mask": "0x0" 00:04:39.132 }, 00:04:39.132 "iaa": { 00:04:39.133 "mask": "0x1000", 00:04:39.133 "tpoint_mask": "0x0" 00:04:39.133 }, 00:04:39.133 "nvme_tcp": { 00:04:39.133 "mask": "0x2000", 00:04:39.133 "tpoint_mask": "0x0" 00:04:39.133 }, 00:04:39.133 "bdev_nvme": { 00:04:39.133 "mask": "0x4000", 00:04:39.133 "tpoint_mask": "0x0" 00:04:39.133 }, 00:04:39.133 "sock": { 00:04:39.133 "mask": "0x8000", 00:04:39.133 "tpoint_mask": "0x0" 00:04:39.133 }, 00:04:39.133 "blob": { 00:04:39.133 "mask": "0x10000", 00:04:39.133 "tpoint_mask": "0x0" 00:04:39.133 }, 00:04:39.133 "bdev_raid": { 00:04:39.133 "mask": "0x20000", 00:04:39.133 "tpoint_mask": "0x0" 00:04:39.133 } 00:04:39.133 }' 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.133 00:04:39.133 real 0m0.217s 00:04:39.133 user 0m0.181s 00:04:39.133 sys 0m0.029s 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.133 21:34:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.133 ************************************ 00:04:39.133 END TEST rpc_trace_cmd_test 00:04:39.133 ************************************ 00:04:39.392 21:34:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.392 21:34:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.392 21:34:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.392 21:34:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.392 21:34:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.392 21:34:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.392 ************************************ 00:04:39.392 START TEST rpc_daemon_integrity 00:04:39.392 ************************************ 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.392 { 00:04:39.392 "name": "Malloc2", 00:04:39.392 "aliases": [ 00:04:39.392 "5eb1d5ba-910a-4aad-b05b-30494f4710f4" 00:04:39.392 ], 00:04:39.392 "product_name": "Malloc disk", 00:04:39.392 "block_size": 512, 00:04:39.392 "num_blocks": 16384, 00:04:39.392 "uuid": "5eb1d5ba-910a-4aad-b05b-30494f4710f4", 00:04:39.392 "assigned_rate_limits": { 00:04:39.392 "rw_ios_per_sec": 0, 00:04:39.392 "rw_mbytes_per_sec": 0, 00:04:39.392 "r_mbytes_per_sec": 0, 00:04:39.392 "w_mbytes_per_sec": 0 00:04:39.392 }, 00:04:39.392 "claimed": false, 00:04:39.392 "zoned": false, 00:04:39.392 "supported_io_types": { 00:04:39.392 "read": true, 00:04:39.392 "write": true, 00:04:39.392 "unmap": true, 00:04:39.392 "flush": true, 00:04:39.392 "reset": true, 00:04:39.392 "nvme_admin": false, 00:04:39.392 "nvme_io": false, 00:04:39.392 "nvme_io_md": false, 00:04:39.392 "write_zeroes": true, 00:04:39.392 "zcopy": true, 00:04:39.392 "get_zone_info": false, 00:04:39.392 "zone_management": false, 00:04:39.392 "zone_append": false, 00:04:39.392 "compare": false, 00:04:39.392 "compare_and_write": false, 00:04:39.392 "abort": true, 00:04:39.392 "seek_hole": false, 00:04:39.392 "seek_data": false, 00:04:39.392 "copy": true, 00:04:39.392 "nvme_iov_md": false 00:04:39.392 }, 00:04:39.392 "memory_domains": [ 00:04:39.392 { 00:04:39.392 "dma_device_id": "system", 00:04:39.392 "dma_device_type": 1 00:04:39.392 }, 00:04:39.392 { 00:04:39.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.392 "dma_device_type": 2 00:04:39.392 } 00:04:39.392 ], 00:04:39.392 "driver_specific": {} 00:04:39.392 } 00:04:39.392 ]' 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.392 [2024-11-29 21:34:11.563433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.392 [2024-11-29 21:34:11.563461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.392 [2024-11-29 21:34:11.563474] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2584af0 00:04:39.392 [2024-11-29 21:34:11.563482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.392 [2024-11-29 21:34:11.564406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.392 [2024-11-29 21:34:11.564428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.392 Passthru0 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.392 { 00:04:39.392 "name": "Malloc2", 00:04:39.392 "aliases": [ 00:04:39.392 "5eb1d5ba-910a-4aad-b05b-30494f4710f4" 00:04:39.392 ], 00:04:39.392 "product_name": "Malloc disk", 00:04:39.392 "block_size": 512, 00:04:39.392 "num_blocks": 16384, 00:04:39.392 "uuid": "5eb1d5ba-910a-4aad-b05b-30494f4710f4", 00:04:39.392 "assigned_rate_limits": { 00:04:39.392 "rw_ios_per_sec": 0, 00:04:39.392 "rw_mbytes_per_sec": 0, 00:04:39.392 "r_mbytes_per_sec": 0, 00:04:39.392 "w_mbytes_per_sec": 0 00:04:39.392 }, 00:04:39.392 "claimed": true, 00:04:39.392 "claim_type": "exclusive_write", 00:04:39.392 "zoned": false, 00:04:39.392 "supported_io_types": { 00:04:39.392 "read": true, 00:04:39.392 "write": true, 00:04:39.392 "unmap": true, 00:04:39.392 "flush": true, 00:04:39.392 "reset": true, 00:04:39.392 "nvme_admin": false, 00:04:39.392 "nvme_io": false, 00:04:39.392 "nvme_io_md": false, 00:04:39.392 "write_zeroes": true, 00:04:39.392 "zcopy": true, 00:04:39.392 "get_zone_info": false, 00:04:39.392 "zone_management": false, 00:04:39.392 "zone_append": false, 00:04:39.392 "compare": false, 00:04:39.392 "compare_and_write": false, 00:04:39.392 "abort": true, 00:04:39.392 "seek_hole": false, 00:04:39.392 "seek_data": false, 00:04:39.392 "copy": true, 00:04:39.392 "nvme_iov_md": false 00:04:39.392 }, 00:04:39.392 "memory_domains": [ 00:04:39.392 { 00:04:39.392 "dma_device_id": "system", 00:04:39.392 "dma_device_type": 1 00:04:39.392 }, 00:04:39.392 { 00:04:39.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.392 "dma_device_type": 2 00:04:39.392 } 00:04:39.392 ], 00:04:39.392 "driver_specific": {} 00:04:39.392 }, 00:04:39.392 { 00:04:39.392 "name": "Passthru0", 00:04:39.392 "aliases": [ 00:04:39.392 "c2e84ec7-6305-51ae-8181-14998c74f4fb" 00:04:39.392 ], 00:04:39.392 "product_name": "passthru", 00:04:39.392 "block_size": 512, 00:04:39.392 "num_blocks": 16384, 00:04:39.392 "uuid": "c2e84ec7-6305-51ae-8181-14998c74f4fb", 00:04:39.392 "assigned_rate_limits": { 00:04:39.392 "rw_ios_per_sec": 0, 00:04:39.392 "rw_mbytes_per_sec": 0, 00:04:39.392 "r_mbytes_per_sec": 0, 00:04:39.392 "w_mbytes_per_sec": 0 00:04:39.392 }, 00:04:39.392 "claimed": false, 00:04:39.392 "zoned": false, 00:04:39.392 "supported_io_types": { 00:04:39.392 "read": true, 00:04:39.392 "write": true, 00:04:39.392 "unmap": true, 00:04:39.392 "flush": true, 00:04:39.392 "reset": true, 00:04:39.392 "nvme_admin": false, 00:04:39.392 "nvme_io": false, 00:04:39.392 "nvme_io_md": false, 00:04:39.392 "write_zeroes": true, 00:04:39.392 "zcopy": true, 
00:04:39.392 "get_zone_info": false, 00:04:39.392 "zone_management": false, 00:04:39.392 "zone_append": false, 00:04:39.392 "compare": false, 00:04:39.392 "compare_and_write": false, 00:04:39.392 "abort": true, 00:04:39.392 "seek_hole": false, 00:04:39.392 "seek_data": false, 00:04:39.392 "copy": true, 00:04:39.392 "nvme_iov_md": false 00:04:39.392 }, 00:04:39.392 "memory_domains": [ 00:04:39.392 { 00:04:39.392 "dma_device_id": "system", 00:04:39.392 "dma_device_type": 1 00:04:39.392 }, 00:04:39.392 { 00:04:39.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.392 "dma_device_type": 2 00:04:39.392 } 00:04:39.392 ], 00:04:39.392 "driver_specific": { 00:04:39.392 "passthru": { 00:04:39.392 "name": "Passthru0", 00:04:39.392 "base_bdev_name": "Malloc2" 00:04:39.392 } 00:04:39.392 } 00:04:39.392 } 00:04:39.392 ]' 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.392 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.651 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.652 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.652 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.652 21:34:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.652 00:04:39.652 real 0m0.268s 00:04:39.652 user 0m0.163s 00:04:39.652 sys 0m0.055s 00:04:39.652 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.652 21:34:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.652 ************************************ 00:04:39.652 END TEST rpc_daemon_integrity 00:04:39.652 ************************************ 00:04:39.652 21:34:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:39.652 21:34:11 rpc -- rpc/rpc.sh@84 -- # killprocess 2838488 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@950 -- # '[' -z 2838488 ']' 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@954 -- # kill -0 2838488 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@955 -- # uname 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2838488 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2838488' 00:04:39.652 killing process with pid 2838488 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@969 -- # kill 2838488 00:04:39.652 21:34:11 rpc -- common/autotest_common.sh@974 -- # wait 2838488 00:04:39.911 00:04:39.911 real 0m2.180s 00:04:39.911 user 0m2.711s 00:04:39.911 sys 0m0.849s 00:04:39.911 21:34:12 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.911 21:34:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.911 ************************************ 00:04:39.911 END TEST rpc 00:04:39.911 ************************************ 00:04:40.170 21:34:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.170 21:34:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.170 21:34:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.170 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:04:40.170 ************************************ 00:04:40.170 START TEST skip_rpc 00:04:40.170 ************************************ 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.170 * Looking for test storage... 00:04:40.170 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.170 21:34:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:40.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.170 --rc genhtml_branch_coverage=1 00:04:40.170 --rc genhtml_function_coverage=1 00:04:40.170 --rc genhtml_legend=1 00:04:40.170 --rc geninfo_all_blocks=1 00:04:40.170 --rc geninfo_unexecuted_blocks=1 00:04:40.170 00:04:40.170 ' 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:40.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.170 --rc genhtml_branch_coverage=1 00:04:40.170 --rc genhtml_function_coverage=1 00:04:40.170 --rc genhtml_legend=1 00:04:40.170 --rc geninfo_all_blocks=1 00:04:40.170 --rc geninfo_unexecuted_blocks=1 00:04:40.170 00:04:40.170 ' 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:40.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.170 --rc genhtml_branch_coverage=1 00:04:40.170 --rc genhtml_function_coverage=1 00:04:40.170 --rc genhtml_legend=1 00:04:40.170 --rc geninfo_all_blocks=1 00:04:40.170 --rc geninfo_unexecuted_blocks=1 00:04:40.170 00:04:40.170 ' 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:40.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.170 --rc genhtml_branch_coverage=1 00:04:40.170 --rc genhtml_function_coverage=1 00:04:40.170 --rc genhtml_legend=1 00:04:40.170 --rc geninfo_all_blocks=1 00:04:40.170 --rc geninfo_unexecuted_blocks=1 00:04:40.170 00:04:40.170 ' 00:04:40.170 21:34:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:40.170 21:34:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:40.170 21:34:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.170 21:34:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.430 ************************************ 00:04:40.430 START TEST skip_rpc 00:04:40.430 ************************************ 00:04:40.430 21:34:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:40.430 21:34:12 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.430 21:34:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2839084 00:04:40.430 21:34:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.430 21:34:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.430 [2024-11-29 21:34:12.473821] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:04:40.430 [2024-11-29 21:34:12.473863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839084 ] 00:04:40.430 [2024-11-29 21:34:12.542015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.430 [2024-11-29 21:34:12.580251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2839084 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2839084 ']' 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2839084 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2839084 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2839084' 00:04:45.704 killing process with pid 2839084 00:04:45.704 21:34:17 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2839084 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2839084 00:04:45.704 00:04:45.704 real 0m5.398s 00:04:45.704 user 0m5.156s 00:04:45.704 sys 0m0.297s 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.704 21:34:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.704 ************************************ 00:04:45.704 END TEST skip_rpc 00:04:45.704 ************************************ 00:04:45.704 21:34:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:45.704 21:34:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.704 21:34:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.704 21:34:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.704 ************************************ 00:04:45.704 START TEST skip_rpc_with_json 00:04:45.704 ************************************ 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2840083 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2840083 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2840083 ']' 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.704 21:34:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.963 [2024-11-29 21:34:17.965631] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:04:45.963 [2024-11-29 21:34:17.965688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840083 ] 00:04:45.963 [2024-11-29 21:34:18.036947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.963 [2024-11-29 21:34:18.076451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.223 [2024-11-29 21:34:18.272966] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:46.223 request: 00:04:46.223 { 00:04:46.223 "trtype": "tcp", 00:04:46.223 "method": "nvmf_get_transports", 00:04:46.223 "req_id": 1 00:04:46.223 } 00:04:46.223 Got JSON-RPC error response 00:04:46.223 response: 00:04:46.223 { 00:04:46.223 "code": -19, 00:04:46.223 "message": "No such device" 00:04:46.223 } 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.223 [2024-11-29 21:34:18.285069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.223 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:46.223 { 00:04:46.223 "subsystems": [ 00:04:46.223 { 00:04:46.223 "subsystem": "fsdev", 00:04:46.223 "config": [ 00:04:46.223 { 00:04:46.223 "method": "fsdev_set_opts", 00:04:46.223 "params": { 00:04:46.223 "fsdev_io_pool_size": 65535, 00:04:46.223 "fsdev_io_cache_size": 256 00:04:46.223 } 00:04:46.223 } 00:04:46.223 ] 00:04:46.223 }, 00:04:46.223 { 00:04:46.223 "subsystem": "keyring", 00:04:46.223 "config": [] 00:04:46.223 }, 00:04:46.223 { 00:04:46.223 "subsystem": "iobuf", 00:04:46.223 "config": [ 00:04:46.223 { 00:04:46.223 "method": "iobuf_set_options", 00:04:46.223 "params": { 00:04:46.223 "small_pool_count": 8192, 00:04:46.223 "large_pool_count": 1024, 00:04:46.223 "small_bufsize": 8192, 00:04:46.223 "large_bufsize": 135168 00:04:46.223 } 00:04:46.223 } 00:04:46.223 ] 00:04:46.223 }, 00:04:46.223 { 00:04:46.223 "subsystem": "sock", 00:04:46.223 "config": [ 00:04:46.224 { 00:04:46.224 "method": 
"sock_set_default_impl", 00:04:46.224 "params": { 00:04:46.224 "impl_name": "posix" 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "sock_impl_set_options", 00:04:46.224 "params": { 00:04:46.224 "impl_name": "ssl", 00:04:46.224 "recv_buf_size": 4096, 00:04:46.224 "send_buf_size": 4096, 00:04:46.224 "enable_recv_pipe": true, 00:04:46.224 "enable_quickack": false, 00:04:46.224 "enable_placement_id": 0, 00:04:46.224 "enable_zerocopy_send_server": true, 00:04:46.224 "enable_zerocopy_send_client": false, 00:04:46.224 "zerocopy_threshold": 0, 00:04:46.224 "tls_version": 0, 00:04:46.224 "enable_ktls": false 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "sock_impl_set_options", 00:04:46.224 "params": { 00:04:46.224 "impl_name": "posix", 00:04:46.224 "recv_buf_size": 2097152, 00:04:46.224 "send_buf_size": 2097152, 00:04:46.224 "enable_recv_pipe": true, 00:04:46.224 "enable_quickack": false, 00:04:46.224 "enable_placement_id": 0, 00:04:46.224 "enable_zerocopy_send_server": true, 00:04:46.224 "enable_zerocopy_send_client": false, 00:04:46.224 "zerocopy_threshold": 0, 00:04:46.224 "tls_version": 0, 00:04:46.224 "enable_ktls": false 00:04:46.224 } 00:04:46.224 } 00:04:46.224 ] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "vmd", 00:04:46.224 "config": [] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "accel", 00:04:46.224 "config": [ 00:04:46.224 { 00:04:46.224 "method": "accel_set_options", 00:04:46.224 "params": { 00:04:46.224 "small_cache_size": 128, 00:04:46.224 "large_cache_size": 16, 00:04:46.224 "task_count": 2048, 00:04:46.224 "sequence_count": 2048, 00:04:46.224 "buf_count": 2048 00:04:46.224 } 00:04:46.224 } 00:04:46.224 ] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "bdev", 00:04:46.224 "config": [ 00:04:46.224 { 00:04:46.224 "method": "bdev_set_options", 00:04:46.224 "params": { 00:04:46.224 "bdev_io_pool_size": 65535, 00:04:46.224 "bdev_io_cache_size": 256, 00:04:46.224 "bdev_auto_examine": true, 00:04:46.224 "iobuf_small_cache_size": 128, 00:04:46.224 "iobuf_large_cache_size": 16 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "bdev_raid_set_options", 00:04:46.224 "params": { 00:04:46.224 "process_window_size_kb": 1024, 00:04:46.224 "process_max_bandwidth_mb_sec": 0 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "bdev_iscsi_set_options", 00:04:46.224 "params": { 00:04:46.224 "timeout_sec": 30 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "bdev_nvme_set_options", 00:04:46.224 "params": { 00:04:46.224 "action_on_timeout": "none", 00:04:46.224 "timeout_us": 0, 00:04:46.224 "timeout_admin_us": 0, 00:04:46.224 "keep_alive_timeout_ms": 10000, 00:04:46.224 "arbitration_burst": 0, 00:04:46.224 "low_priority_weight": 0, 00:04:46.224 "medium_priority_weight": 0, 00:04:46.224 "high_priority_weight": 0, 00:04:46.224 "nvme_adminq_poll_period_us": 10000, 00:04:46.224 "nvme_ioq_poll_period_us": 0, 00:04:46.224 "io_queue_requests": 0, 00:04:46.224 "delay_cmd_submit": true, 00:04:46.224 "transport_retry_count": 4, 00:04:46.224 "bdev_retry_count": 3, 00:04:46.224 "transport_ack_timeout": 0, 00:04:46.224 "ctrlr_loss_timeout_sec": 0, 00:04:46.224 "reconnect_delay_sec": 0, 00:04:46.224 "fast_io_fail_timeout_sec": 0, 00:04:46.224 "disable_auto_failback": false, 00:04:46.224 "generate_uuids": false, 00:04:46.224 "transport_tos": 0, 00:04:46.224 "nvme_error_stat": false, 00:04:46.224 "rdma_srq_size": 0, 00:04:46.224 "io_path_stat": false, 00:04:46.224 
"allow_accel_sequence": false, 00:04:46.224 "rdma_max_cq_size": 0, 00:04:46.224 "rdma_cm_event_timeout_ms": 0, 00:04:46.224 "dhchap_digests": [ 00:04:46.224 "sha256", 00:04:46.224 "sha384", 00:04:46.224 "sha512" 00:04:46.224 ], 00:04:46.224 "dhchap_dhgroups": [ 00:04:46.224 "null", 00:04:46.224 "ffdhe2048", 00:04:46.224 "ffdhe3072", 00:04:46.224 "ffdhe4096", 00:04:46.224 "ffdhe6144", 00:04:46.224 "ffdhe8192" 00:04:46.224 ] 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "bdev_nvme_set_hotplug", 00:04:46.224 "params": { 00:04:46.224 "period_us": 100000, 00:04:46.224 "enable": false 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "bdev_wait_for_examine" 00:04:46.224 } 00:04:46.224 ] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "scsi", 00:04:46.224 "config": null 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "scheduler", 00:04:46.224 "config": [ 00:04:46.224 { 00:04:46.224 "method": "framework_set_scheduler", 00:04:46.224 "params": { 00:04:46.224 "name": "static" 00:04:46.224 } 00:04:46.224 } 00:04:46.224 ] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "vhost_scsi", 00:04:46.224 "config": [] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "vhost_blk", 00:04:46.224 "config": [] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "ublk", 00:04:46.224 "config": [] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "nbd", 00:04:46.224 "config": [] 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "subsystem": "nvmf", 00:04:46.224 "config": [ 00:04:46.224 { 00:04:46.224 "method": "nvmf_set_config", 00:04:46.224 "params": { 00:04:46.224 "discovery_filter": "match_any", 00:04:46.224 "admin_cmd_passthru": { 00:04:46.224 "identify_ctrlr": false 00:04:46.224 }, 00:04:46.224 "dhchap_digests": [ 00:04:46.224 "sha256", 00:04:46.224 "sha384", 00:04:46.224 "sha512" 00:04:46.224 ], 00:04:46.224 "dhchap_dhgroups": [ 00:04:46.224 "null", 00:04:46.224 "ffdhe2048", 00:04:46.224 "ffdhe3072", 00:04:46.224 "ffdhe4096", 00:04:46.224 "ffdhe6144", 00:04:46.224 "ffdhe8192" 00:04:46.224 ] 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "nvmf_set_max_subsystems", 00:04:46.224 "params": { 00:04:46.224 "max_subsystems": 1024 00:04:46.224 } 00:04:46.224 }, 00:04:46.224 { 00:04:46.224 "method": "nvmf_set_crdt", 00:04:46.224 "params": { 00:04:46.224 "crdt1": 0, 00:04:46.225 "crdt2": 0, 00:04:46.225 "crdt3": 0 00:04:46.225 } 00:04:46.225 }, 00:04:46.225 { 00:04:46.225 "method": "nvmf_create_transport", 00:04:46.225 "params": { 00:04:46.225 "trtype": "TCP", 00:04:46.225 "max_queue_depth": 128, 00:04:46.225 "max_io_qpairs_per_ctrlr": 127, 00:04:46.225 "in_capsule_data_size": 4096, 00:04:46.225 "max_io_size": 131072, 00:04:46.225 "io_unit_size": 131072, 00:04:46.225 "max_aq_depth": 128, 00:04:46.225 "num_shared_buffers": 511, 00:04:46.225 "buf_cache_size": 4294967295, 00:04:46.225 "dif_insert_or_strip": false, 00:04:46.225 "zcopy": false, 00:04:46.225 "c2h_success": true, 00:04:46.225 "sock_priority": 0, 00:04:46.225 "abort_timeout_sec": 1, 00:04:46.225 "ack_timeout": 0, 00:04:46.225 "data_wr_pool_size": 0 00:04:46.225 } 00:04:46.225 } 00:04:46.225 ] 00:04:46.225 }, 00:04:46.225 { 00:04:46.225 "subsystem": "iscsi", 00:04:46.225 "config": [ 00:04:46.225 { 00:04:46.225 "method": "iscsi_set_options", 00:04:46.225 "params": { 00:04:46.225 "node_base": "iqn.2016-06.io.spdk", 00:04:46.225 "max_sessions": 128, 00:04:46.225 "max_connections_per_session": 2, 00:04:46.225 "max_queue_depth": 64, 00:04:46.225 "default_time2wait": 2, 
00:04:46.225 "default_time2retain": 20, 00:04:46.225 "first_burst_length": 8192, 00:04:46.225 "immediate_data": true, 00:04:46.225 "allow_duplicated_isid": false, 00:04:46.225 "error_recovery_level": 0, 00:04:46.225 "nop_timeout": 60, 00:04:46.225 "nop_in_interval": 30, 00:04:46.225 "disable_chap": false, 00:04:46.225 "require_chap": false, 00:04:46.225 "mutual_chap": false, 00:04:46.225 "chap_group": 0, 00:04:46.225 "max_large_datain_per_connection": 64, 00:04:46.225 "max_r2t_per_connection": 4, 00:04:46.225 "pdu_pool_size": 36864, 00:04:46.225 "immediate_data_pool_size": 16384, 00:04:46.225 "data_out_pool_size": 2048 00:04:46.225 } 00:04:46.225 } 00:04:46.225 ] 00:04:46.225 } 00:04:46.225 ] 00:04:46.225 } 00:04:46.225 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:46.225 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2840083 00:04:46.225 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2840083 ']' 00:04:46.225 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2840083 00:04:46.225 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:46.225 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.484 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2840083 00:04:46.484 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.484 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.484 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2840083' 00:04:46.484 killing process with pid 2840083 00:04:46.484 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2840083 00:04:46.484 21:34:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2840083 00:04:46.744 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2840193 00:04:46.744 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:46.744 21:34:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2840193 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2840193 ']' 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2840193 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2840193 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2840193' 00:04:52.021 killing process with pid 2840193 00:04:52.021 21:34:23 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2840193 00:04:52.021 21:34:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2840193 00:04:52.021 21:34:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:52.021 21:34:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:52.021 00:04:52.021 real 0m6.321s 00:04:52.021 user 0m5.989s 00:04:52.021 sys 0m0.662s 00:04:52.021 21:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.021 21:34:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.021 ************************************ 00:04:52.021 END TEST skip_rpc_with_json 00:04:52.021 ************************************ 00:04:52.281 21:34:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:52.281 21:34:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.281 21:34:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.281 21:34:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.281 ************************************ 00:04:52.281 START TEST skip_rpc_with_delay 00:04:52.281 ************************************ 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.281 [2024-11-29 21:34:24.369657] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
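The *ERROR* line above is the expected outcome of skip_rpc_with_delay, not a failure of the run: --no-rpc-server and --wait-for-rpc are mutually exclusive, so spdk_tgt must refuse to start rather than wait forever for RPCs that can never arrive. A minimal sketch of the check the test performs, with the same flags as the invocation above:

  # must exit non-zero: waiting for RPCs makes no sense with the RPC server disabled
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'spdk_tgt unexpectedly started' >&2
      exit 1
  fi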
00:04:52.281 [2024-11-29 21:34:24.369734] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:52.281 00:04:52.281 real 0m0.070s 00:04:52.281 user 0m0.034s 00:04:52.281 sys 0m0.036s 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.281 21:34:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:52.281 ************************************ 00:04:52.281 END TEST skip_rpc_with_delay 00:04:52.281 ************************************ 00:04:52.281 21:34:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:52.281 21:34:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:52.281 21:34:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:52.281 21:34:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.281 21:34:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.281 21:34:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.281 ************************************ 00:04:52.281 START TEST exit_on_failed_rpc_init 00:04:52.281 ************************************ 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2841308 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2841308 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2841308 ']' 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.281 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.281 [2024-11-29 21:34:24.522114] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
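While the first target initializes, waitforlisten blocks until the RPC socket answers. A hedged sketch of that readiness poll, assuming a backgrounded spdk_tgt whose pid is in $pid (rpc_get_methods is a real SPDK RPC; the loop shape is illustrative, though the retry budget of 100 matches max_retries in the trace above):

for _ in $(seq 1 100); do
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  kill -0 "$pid" 2>/dev/null || exit 1   # bail out if the target already died
  sleep 0.1
done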
00:04:52.281 [2024-11-29 21:34:24.522165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841308 ] 00:04:52.541 [2024-11-29 21:34:24.592101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.541 [2024-11-29 21:34:24.631491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.801 21:34:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.801 [2024-11-29 21:34:24.884008] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:04:52.801 [2024-11-29 21:34:24.884062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841314 ] 00:04:52.801 [2024-11-29 21:34:24.952766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.801 [2024-11-29 21:34:24.991095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.801 [2024-11-29 21:34:24.991162] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
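This failure is the point of the test: both targets defaulted to /var/tmp/spdk.sock, so the second instance (-m 0x2) cannot bind its RPC listener and spdk_tgt exits non-zero, exactly what exit_on_failed_rpc_init asserts. Running two targets side by side for real would give each instance its own socket via -r (socket paths below are illustrative):

./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &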
00:04:52.801 [2024-11-29 21:34:24.991173] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:52.801 [2024-11-29 21:34:24.991181] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2841308 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2841308 ']' 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2841308 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2841308 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2841308' 00:04:53.059 killing process with pid 2841308 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2841308 00:04:53.059 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2841308 00:04:53.316 00:04:53.316 real 0m0.969s 00:04:53.316 user 0m1.008s 00:04:53.316 sys 0m0.437s 00:04:53.316 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.317 21:34:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.317 ************************************ 00:04:53.317 END TEST exit_on_failed_rpc_init 00:04:53.317 ************************************ 00:04:53.317 21:34:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:53.317 00:04:53.317 real 0m13.276s 00:04:53.317 user 0m12.393s 00:04:53.317 sys 0m1.789s 00:04:53.317 21:34:25 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.317 21:34:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.317 ************************************ 00:04:53.317 END TEST skip_rpc 00:04:53.317 ************************************ 00:04:53.317 21:34:25 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.317 21:34:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.317 21:34:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.317 21:34:25 -- 
common/autotest_common.sh@10 -- # set +x 00:04:53.317 ************************************ 00:04:53.317 START TEST rpc_client 00:04:53.317 ************************************ 00:04:53.317 21:34:25 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.574 * Looking for test storage... 00:04:53.574 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:53.574 21:34:25 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.574 21:34:25 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.575 21:34:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.575 --rc genhtml_branch_coverage=1 00:04:53.575 --rc genhtml_function_coverage=1 00:04:53.575 --rc genhtml_legend=1 00:04:53.575 --rc geninfo_all_blocks=1 00:04:53.575 --rc geninfo_unexecuted_blocks=1 00:04:53.575 00:04:53.575 ' 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:53.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.575 --rc genhtml_branch_coverage=1 00:04:53.575 --rc genhtml_function_coverage=1 00:04:53.575 --rc genhtml_legend=1 00:04:53.575 --rc geninfo_all_blocks=1 00:04:53.575 --rc geninfo_unexecuted_blocks=1 00:04:53.575 00:04:53.575 ' 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.575 --rc genhtml_branch_coverage=1 00:04:53.575 --rc genhtml_function_coverage=1 00:04:53.575 --rc genhtml_legend=1 00:04:53.575 --rc geninfo_all_blocks=1 00:04:53.575 --rc geninfo_unexecuted_blocks=1 00:04:53.575 00:04:53.575 ' 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.575 --rc genhtml_branch_coverage=1 00:04:53.575 --rc genhtml_function_coverage=1 00:04:53.575 --rc genhtml_legend=1 00:04:53.575 --rc geninfo_all_blocks=1 00:04:53.575 --rc geninfo_unexecuted_blocks=1 00:04:53.575 00:04:53.575 ' 00:04:53.575 21:34:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:53.575 OK 00:04:53.575 21:34:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.575 00:04:53.575 real 0m0.194s 00:04:53.575 user 0m0.114s 00:04:53.575 sys 0m0.097s 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.575 21:34:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.575 ************************************ 00:04:53.575 END TEST rpc_client 00:04:53.575 ************************************ 00:04:53.575 21:34:25 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.575 
21:34:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.575 21:34:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.575 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:04:53.835 ************************************ 00:04:53.835 START TEST json_config 00:04:53.835 ************************************ 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.835 21:34:25 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.835 21:34:25 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.835 21:34:25 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.835 21:34:25 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.835 21:34:25 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.835 21:34:25 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.835 21:34:25 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.835 21:34:25 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.835 21:34:25 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.835 21:34:25 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.835 21:34:25 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.835 21:34:25 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:53.835 21:34:25 json_config -- scripts/common.sh@345 -- # : 1 00:04:53.835 21:34:25 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.835 21:34:25 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.835 21:34:25 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:53.835 21:34:25 json_config -- scripts/common.sh@353 -- # local d=1 00:04:53.835 21:34:25 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.835 21:34:25 json_config -- scripts/common.sh@355 -- # echo 1 00:04:53.835 21:34:25 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.835 21:34:25 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:53.835 21:34:25 json_config -- scripts/common.sh@353 -- # local d=2 00:04:53.835 21:34:25 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.835 21:34:25 json_config -- scripts/common.sh@355 -- # echo 2 00:04:53.835 21:34:25 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.835 21:34:25 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.835 21:34:25 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.835 21:34:25 json_config -- scripts/common.sh@368 -- # return 0 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.835 --rc genhtml_branch_coverage=1 00:04:53.835 --rc genhtml_function_coverage=1 00:04:53.835 --rc genhtml_legend=1 00:04:53.835 --rc geninfo_all_blocks=1 00:04:53.835 --rc geninfo_unexecuted_blocks=1 00:04:53.835 00:04:53.835 ' 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.835 --rc genhtml_branch_coverage=1 00:04:53.835 --rc genhtml_function_coverage=1 00:04:53.835 --rc genhtml_legend=1 00:04:53.835 --rc geninfo_all_blocks=1 00:04:53.835 --rc geninfo_unexecuted_blocks=1 00:04:53.835 00:04:53.835 ' 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.835 --rc genhtml_branch_coverage=1 00:04:53.835 --rc genhtml_function_coverage=1 00:04:53.835 --rc genhtml_legend=1 00:04:53.835 --rc geninfo_all_blocks=1 00:04:53.835 --rc geninfo_unexecuted_blocks=1 00:04:53.835 00:04:53.835 ' 00:04:53.835 21:34:25 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.835 --rc genhtml_branch_coverage=1 00:04:53.835 --rc genhtml_function_coverage=1 00:04:53.835 --rc genhtml_legend=1 00:04:53.835 --rc geninfo_all_blocks=1 00:04:53.835 --rc geninfo_unexecuted_blocks=1 00:04:53.835 00:04:53.835 ' 00:04:53.835 21:34:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.835 21:34:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.835 21:34:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.835 21:34:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.835 21:34:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.835 21:34:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.835 21:34:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.835 21:34:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.835 21:34:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
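The nvmf/common.sh preamble above pins the fabric defaults every nvmf test shares: ports 4420-4422 and the 192.168.100.0/24 test prefix, with least address 8. A small sketch of the target address this implies, under the assumption that the least address is simply appended to the prefix (the concrete allocation helper differs, but the resulting value matches this run):

NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
NVMF_FIRST_TARGET_IP="${NVMF_IP_PREFIX}.${NVMF_IP_LEAST_ADDR}"   # 192.168.100.8, the listener address later in this log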
00:04:53.836 21:34:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.836 21:34:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.836 21:34:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:53.836 21:34:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.836 21:34:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.836 21:34:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.836 21:34:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.836 21:34:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.836 21:34:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.836 21:34:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.836 21:34:26 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.836 21:34:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@51 -- # : 0 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.836 
21:34:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.836 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.836 21:34:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:53.836 INFO: JSON configuration test init 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.836 21:34:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:53.836 21:34:26 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:53.836 21:34:26 json_config -- json_config/common.sh@10 -- # shift 00:04:53.836 21:34:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.836 21:34:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.836 21:34:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.836 21:34:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.836 21:34:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.836 21:34:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2841702 00:04:53.836 21:34:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.836 Waiting for target to run... 00:04:53.836 21:34:26 json_config -- json_config/common.sh@25 -- # waitforlisten 2841702 /var/tmp/spdk_tgt.sock 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@831 -- # '[' -z 2841702 ']' 00:04:53.836 21:34:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.836 21:34:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.095 [2024-11-29 21:34:26.088573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
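json_config_test_start_app launches this target paused: --wait-for-rpc holds subsystem initialization until an RPC releases it, -s 1024 caps DPDK memory at 1024 MB, and -r points rpc.py at a private socket. In this run the hold is released implicitly through the load_config call a few lines below; done by hand, the launch-and-resume shape might look like this (framework_start_init is a real SPDK RPC, but its direct use here is an assumption):

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init   # release the --wait-for-rpc hold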
00:04:54.095 [2024-11-29 21:34:26.088628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841702 ] 00:04:54.354 [2024-11-29 21:34:26.532624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.354 [2024-11-29 21:34:26.561831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.923 21:34:26 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.923 21:34:26 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:54.923 21:34:26 json_config -- json_config/common.sh@26 -- # echo '' 00:04:54.923 00:04:54.923 21:34:26 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:54.923 21:34:26 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:54.923 21:34:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.923 21:34:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.923 21:34:26 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:54.923 21:34:26 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:54.923 21:34:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.923 21:34:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.923 21:34:26 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:54.923 21:34:26 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:54.923 21:34:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:58.211 21:34:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@54 -- # sort 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:58.211 21:34:30 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@472 -- # prepare_net_devs 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@434 -- # local -g is_hw=no 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@436 -- # remove_spdk_ns 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@438 -- # [[ phy-fallback != virt ]] 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:04:58.211 21:34:30 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:58.211 21:34:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:06.333 
21:34:37 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:06.333 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:06.333 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@370 -- # [[ 
mlx5_core == unbound ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:06.333 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:06.333 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@438 -- # is_hw=yes 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@444 -- # rdma_device_init 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@62 -- # uname 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@526 -- # allocate_nic_ips 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 
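rdma_device_init above boils down to loading the kernel RDMA stack in dependency order before any addressing work. The exact sequence from the trace, condensed:

for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$m"
done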
00:05:06.333 21:34:37 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:06.333 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:06.333 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:06.333 altname enp217s0f0np0 00:05:06.333 altname ens818f0np0 00:05:06.333 inet 192.168.100.8/24 scope global mlx_0_0 00:05:06.333 valid_lft forever preferred_lft forever 00:05:06.333 21:34:37 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:06.334 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:06.334 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:06.334 altname enp217s0f1np1 00:05:06.334 altname ens818f1np1 00:05:06.334 inet 192.168.100.9/24 scope global mlx_0_1 00:05:06.334 valid_lft forever preferred_lft forever 
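The get_ip_address calls traced above reduce to a one-line pipeline over ip(8). As a standalone helper (same name and pipeline as the trace):

get_ip_address() {
  ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9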
00:05:06.334 21:34:37 json_config -- nvmf/common.sh@446 -- # return 0 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:05:06.334 192.168.100.9' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:05:06.334 192.168.100.9' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@481 -- # head -n 1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:05:06.334 192.168.100.9' 00:05:06.334 21:34:37 json_config -- 
nvmf/common.sh@482 -- # tail -n +2 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@482 -- # head -n 1 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:05:06.334 21:34:37 json_config -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:05:06.334 21:34:37 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:06.334 21:34:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.334 21:34:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.334 MallocForNvmf0 00:05:06.334 21:34:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.334 21:34:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.334 MallocForNvmf1 00:05:06.334 21:34:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:06.334 21:34:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:06.334 [2024-11-29 21:34:37.892806] rdma.c:2737:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:06.334 [2024-11-29 21:34:37.922959] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d5d170/0x1c39a00) succeed. 00:05:06.334 [2024-11-29 21:34:37.935801] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d60370/0x1c7b0a0) succeed. 
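With the RDMA fabric verified, the target is provisioned over the freshly started RPC socket: two malloc bdevs, then an RDMA transport with an 8192-byte I/O unit and zero requested in-capsule data, which is why the target warns just above that it raised the in-capsule size to the 256-byte minimum. The calls exactly as traced:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0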
00:05:06.334 21:34:37 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.334 21:34:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.334 21:34:38 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.334 21:34:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.334 21:34:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.334 21:34:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.334 21:34:38 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:06.334 21:34:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:06.593 [2024-11-29 21:34:38.676480] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:06.593 21:34:38 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:06.593 21:34:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.593 21:34:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.593 21:34:38 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:06.593 21:34:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.593 21:34:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.593 21:34:38 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:06.593 21:34:38 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.593 21:34:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.853 MallocBdevForConfigChangeCheck 00:05:06.853 21:34:38 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:06.853 21:34:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.853 21:34:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.853 21:34:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:06.853 21:34:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.112 21:34:39 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:07.112 INFO: shutting down applications... 
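The save_config above snapshots the complete target state, including the subsystem wiring traced through json_config.sh@253-256: one NVMe-oF subsystem carrying both malloc namespaces, exposed on the RDMA listener. Collected into standalone calls against the same socket:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420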
00:05:07.112 21:34:39 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:07.112 21:34:39 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:07.112 21:34:39 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:07.112 21:34:39 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:09.647 Calling clear_iscsi_subsystem 00:05:09.647 Calling clear_nvmf_subsystem 00:05:09.647 Calling clear_nbd_subsystem 00:05:09.647 Calling clear_ublk_subsystem 00:05:09.647 Calling clear_vhost_blk_subsystem 00:05:09.647 Calling clear_vhost_scsi_subsystem 00:05:09.647 Calling clear_bdev_subsystem 00:05:09.905 21:34:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:09.905 21:34:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:09.905 21:34:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:09.905 21:34:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.905 21:34:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:09.905 21:34:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:10.164 21:34:42 json_config -- json_config/json_config.sh@352 -- # break 00:05:10.164 21:34:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:10.164 21:34:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:10.164 21:34:42 json_config -- json_config/common.sh@31 -- # local app=target 00:05:10.164 21:34:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.164 21:34:42 json_config -- json_config/common.sh@35 -- # [[ -n 2841702 ]] 00:05:10.164 21:34:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2841702 00:05:10.164 21:34:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.164 21:34:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.164 21:34:42 json_config -- json_config/common.sh@41 -- # kill -0 2841702 00:05:10.164 21:34:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.733 21:34:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.733 21:34:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.733 21:34:42 json_config -- json_config/common.sh@41 -- # kill -0 2841702 00:05:10.733 21:34:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:10.733 21:34:42 json_config -- json_config/common.sh@43 -- # break 00:05:10.733 21:34:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:10.733 21:34:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:10.733 SPDK target shutdown done 00:05:10.733 21:34:42 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:10.733 INFO: relaunching applications... 
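The shutdown that just completed is driven by a small polling loop in json_config/common.sh; a condensed sketch of that helper, with the pid taken from the trace above:

pid=2841702                        # target pid from the trace
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do   # poll up to 30 times, 0.5 s apart
    kill -0 "$pid" 2>/dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'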
00:05:10.733 21:34:42 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.733 21:34:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:10.733 21:34:42 json_config -- json_config/common.sh@10 -- # shift 00:05:10.733 21:34:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.733 21:34:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.733 21:34:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.733 21:34:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.733 21:34:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.733 21:34:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2846799 00:05:10.733 21:34:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.733 Waiting for target to run... 00:05:10.733 21:34:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.733 21:34:42 json_config -- json_config/common.sh@25 -- # waitforlisten 2846799 /var/tmp/spdk_tgt.sock 00:05:10.733 21:34:42 json_config -- common/autotest_common.sh@831 -- # '[' -z 2846799 ']' 00:05:10.733 21:34:42 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.733 21:34:42 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.733 21:34:42 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.733 21:34:42 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.733 21:34:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.733 [2024-11-29 21:34:42.796698] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:10.733 [2024-11-29 21:34:42.796768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846799 ] 00:05:11.301 [2024-11-29 21:34:43.255128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.301 [2024-11-29 21:34:43.286268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.589 [2024-11-29 21:34:46.334408] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x207be60/0x2086280) succeed. 00:05:14.589 [2024-11-29 21:34:46.345225] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x207f060/0x20c7920) succeed. 
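The relaunch above boils down to starting spdk_tgt from the saved JSON and waiting for its RPC socket. A minimal sketch; the polling loop below is a stand-in for the waitforlisten helper, whose real implementation lives in autotest_common.sh:

spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
app_pid=$!
until $rpc -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1                      # wait until the socket accepts RPCs
done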
00:05:14.589 [2024-11-29 21:34:46.395970] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:14.850 21:34:47 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.850 21:34:47 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:14.850 21:34:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.850 00:05:14.850 21:34:47 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:14.850 21:34:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.850 INFO: Checking if target configuration is the same... 00:05:14.850 21:34:47 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:14.850 21:34:47 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.850 21:34:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.850 + '[' 2 -ne 2 ']' 00:05:14.850 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.850 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:14.850 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:14.850 +++ basename /dev/fd/62 00:05:14.850 ++ mktemp /tmp/62.XXX 00:05:14.850 + tmp_file_1=/tmp/62.YEL 00:05:14.850 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.850 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.850 + tmp_file_2=/tmp/spdk_tgt_config.json.Ak0 00:05:14.850 + ret=0 00:05:14.850 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.416 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.416 + diff -u /tmp/62.YEL /tmp/spdk_tgt_config.json.Ak0 00:05:15.416 + echo 'INFO: JSON config files are the same' 00:05:15.416 INFO: JSON config files are the same 00:05:15.416 + rm /tmp/62.YEL /tmp/spdk_tgt_config.json.Ak0 00:05:15.416 + exit 0 00:05:15.417 21:34:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:15.417 21:34:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:15.417 INFO: changing configuration and checking if this can be detected... 
00:05:15.417 21:34:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.417 21:34:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.417 21:34:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.417 21:34:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:15.417 21:34:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.417 + '[' 2 -ne 2 ']' 00:05:15.417 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:15.417 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:15.417 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:15.417 +++ basename /dev/fd/62 00:05:15.417 ++ mktemp /tmp/62.XXX 00:05:15.417 + tmp_file_1=/tmp/62.8Oz 00:05:15.417 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.417 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.417 + tmp_file_2=/tmp/spdk_tgt_config.json.Rt1 00:05:15.417 + ret=0 00:05:15.417 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.675 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.933 + diff -u /tmp/62.8Oz /tmp/spdk_tgt_config.json.Rt1 00:05:15.933 + ret=1 00:05:15.933 + echo '=== Start of file: /tmp/62.8Oz ===' 00:05:15.933 + cat /tmp/62.8Oz 00:05:15.933 + echo '=== End of file: /tmp/62.8Oz ===' 00:05:15.933 + echo '' 00:05:15.933 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Rt1 ===' 00:05:15.933 + cat /tmp/spdk_tgt_config.json.Rt1 00:05:15.933 + echo '=== End of file: /tmp/spdk_tgt_config.json.Rt1 ===' 00:05:15.933 + echo '' 00:05:15.933 + rm /tmp/62.8Oz /tmp/spdk_tgt_config.json.Rt1 00:05:15.933 + exit 1 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:15.933 INFO: configuration change detected. 
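Both json_diff.sh runs above follow the same pattern: dump the live config, normalize both sides with config_filter.py -method sort, and diff them. A condensed sketch (the real script plumbs the RPC output through /dev/fd/62 and mktemp files):

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
filter=$rootdir/test/json_config/config_filter.py
$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
$filter -method sort < $rootdir/spdk_tgt_config.json > /tmp/saved.json
if diff -u /tmp/saved.json /tmp/live.json; then
    echo 'INFO: JSON config files are the same'
else
    # deleting MallocBdevForConfigChangeCheck triggers this branch
    echo 'INFO: configuration change detected.'
fi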
00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:15.933 21:34:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.933 21:34:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@324 -- # [[ -n 2846799 ]] 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.933 21:34:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.933 21:34:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:15.933 21:34:47 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.933 21:34:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.933 21:34:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.933 21:34:48 json_config -- json_config/json_config.sh@330 -- # killprocess 2846799 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@950 -- # '[' -z 2846799 ']' 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@954 -- # kill -0 2846799 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@955 -- # uname 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2846799 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2846799' 00:05:15.933 killing process with pid 2846799 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@969 -- # kill 2846799 00:05:15.933 21:34:48 json_config -- common/autotest_common.sh@974 -- # wait 2846799 00:05:18.468 21:34:50 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.468 21:34:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:18.468 21:34:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.468 21:34:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.468 21:34:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:18.468 21:34:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:18.468 INFO: Success 00:05:18.468 21:34:50 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:05:18.468 21:34:50 json_config -- nvmf/common.sh@512 -- # nvmfcleanup 00:05:18.468 21:34:50 json_config -- nvmf/common.sh@121 -- # sync 00:05:18.468 21:34:50 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:18.468 21:34:50 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:18.468 21:34:50 json_config -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:05:18.468 21:34:50 json_config -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:05:18.468 21:34:50 json_config -- nvmf/common.sh@519 -- # [[ '' == \t\c\p ]] 00:05:18.468 00:05:18.468 real 0m24.845s 00:05:18.468 user 0m27.434s 00:05:18.468 sys 0m7.749s 00:05:18.468 21:34:50 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.468 21:34:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.468 ************************************ 00:05:18.468 END TEST json_config 00:05:18.468 ************************************ 00:05:18.729 21:34:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.729 21:34:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.729 21:34:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.729 21:34:50 -- common/autotest_common.sh@10 -- # set +x 00:05:18.729 ************************************ 00:05:18.729 START TEST json_config_extra_key 00:05:18.729 ************************************ 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.729 --rc genhtml_branch_coverage=1 00:05:18.729 --rc genhtml_function_coverage=1 00:05:18.729 --rc genhtml_legend=1 00:05:18.729 --rc geninfo_all_blocks=1 00:05:18.729 --rc geninfo_unexecuted_blocks=1 00:05:18.729 00:05:18.729 ' 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.729 --rc genhtml_branch_coverage=1 00:05:18.729 --rc genhtml_function_coverage=1 00:05:18.729 --rc genhtml_legend=1 00:05:18.729 --rc geninfo_all_blocks=1 00:05:18.729 --rc geninfo_unexecuted_blocks=1 00:05:18.729 00:05:18.729 ' 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.729 --rc genhtml_branch_coverage=1 00:05:18.729 --rc genhtml_function_coverage=1 00:05:18.729 --rc genhtml_legend=1 00:05:18.729 --rc geninfo_all_blocks=1 00:05:18.729 --rc geninfo_unexecuted_blocks=1 00:05:18.729 00:05:18.729 ' 00:05:18.729 21:34:50 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.729 --rc genhtml_branch_coverage=1 00:05:18.729 --rc genhtml_function_coverage=1 00:05:18.729 --rc genhtml_legend=1 00:05:18.729 --rc geninfo_all_blocks=1 00:05:18.729 --rc geninfo_unexecuted_blocks=1 00:05:18.729 00:05:18.729 ' 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.729 
21:34:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.729 21:34:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.729 21:34:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.729 21:34:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.729 21:34:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.729 21:34:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.729 21:34:50 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.729 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.729 21:34:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.729 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:18.730 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.730 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.730 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:18.730 INFO: launching applications... 
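The '[: : integer expression expected' complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test's -eq needs integers on both sides, and the variable being tested expands empty here. A non-empty guard avoids it (the variable name below is illustrative, not the actual one in common.sh):

if [ -n "${SOME_TEST_FLAG:-}" ] && [ "$SOME_TEST_FLAG" -eq 1 ]; then
    echo 'flag enabled'
fi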
00:05:18.730 21:34:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2848270 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.730 Waiting for target to run... 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2848270 /var/tmp/spdk_tgt.sock 00:05:18.730 21:34:50 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2848270 ']' 00:05:18.730 21:34:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.730 21:34:50 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.730 21:34:50 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.730 21:34:50 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.730 21:34:50 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.730 21:34:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.989 [2024-11-29 21:34:51.005941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:18.989 [2024-11-29 21:34:51.005992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848270 ] 00:05:19.249 [2024-11-29 21:34:51.303845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.249 [2024-11-29 21:34:51.325992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.818 21:34:51 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.818 21:34:51 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.818 00:05:19.818 21:34:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:19.818 INFO: shutting down applications... 
00:05:19.818 21:34:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2848270 ]] 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2848270 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2848270 00:05:19.818 21:34:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.388 21:34:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.388 21:34:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.388 21:34:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2848270 00:05:20.388 21:34:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.388 21:34:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.388 21:34:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.388 21:34:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.388 SPDK target shutdown done 00:05:20.388 21:34:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.388 Success 00:05:20.388 00:05:20.388 real 0m1.581s 00:05:20.388 user 0m1.331s 00:05:20.388 sys 0m0.428s 00:05:20.388 21:34:52 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.388 21:34:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.388 ************************************ 00:05:20.388 END TEST json_config_extra_key 00:05:20.388 ************************************ 00:05:20.388 21:34:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.388 21:34:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.388 21:34:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.388 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:05:20.388 ************************************ 00:05:20.388 START TEST alias_rpc 00:05:20.388 ************************************ 00:05:20.388 21:34:52 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.388 * Looking for test storage... 
00:05:20.388 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:20.388 21:34:52 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:20.388 21:34:52 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:20.388 21:34:52 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:20.388 21:34:52 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.388 21:34:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:20.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.389 --rc genhtml_branch_coverage=1 00:05:20.389 --rc genhtml_function_coverage=1 00:05:20.389 --rc genhtml_legend=1 00:05:20.389 --rc geninfo_all_blocks=1 00:05:20.389 --rc geninfo_unexecuted_blocks=1 00:05:20.389 00:05:20.389 ' 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:20.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.389 --rc genhtml_branch_coverage=1 00:05:20.389 --rc genhtml_function_coverage=1 00:05:20.389 --rc genhtml_legend=1 00:05:20.389 --rc geninfo_all_blocks=1 00:05:20.389 --rc geninfo_unexecuted_blocks=1 00:05:20.389 00:05:20.389 ' 00:05:20.389 21:34:52 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:20.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.389 --rc genhtml_branch_coverage=1 00:05:20.389 --rc genhtml_function_coverage=1 00:05:20.389 --rc genhtml_legend=1 00:05:20.389 --rc geninfo_all_blocks=1 00:05:20.389 --rc geninfo_unexecuted_blocks=1 00:05:20.389 00:05:20.389 ' 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:20.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.389 --rc genhtml_branch_coverage=1 00:05:20.389 --rc genhtml_function_coverage=1 00:05:20.389 --rc genhtml_legend=1 00:05:20.389 --rc geninfo_all_blocks=1 00:05:20.389 --rc geninfo_unexecuted_blocks=1 00:05:20.389 00:05:20.389 ' 00:05:20.389 21:34:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.389 21:34:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2848594 00:05:20.389 21:34:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.389 21:34:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2848594 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2848594 ']' 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.389 21:34:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.648 [2024-11-29 21:34:52.650549] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
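The lcov version probe traced above (and before each earlier suite) runs 'lt 1.15 2' through cmp_versions in scripts/common.sh, which splits both versions on '.', '-' and ':' and compares them field by field. A condensed sketch of that logic, assuming purely numeric fields (the real helper validates each field with decimal()):

lt() {   # return 0 when version $1 < version $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov is older than 2'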
00:05:20.649 [2024-11-29 21:34:52.650602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848594 ] 00:05:20.649 [2024-11-29 21:34:52.718312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.649 [2024-11-29 21:34:52.757207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.909 21:34:52 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.909 21:34:52 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:20.909 21:34:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:21.169 21:34:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2848594 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2848594 ']' 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2848594 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2848594 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2848594' 00:05:21.169 killing process with pid 2848594 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@969 -- # kill 2848594 00:05:21.169 21:34:53 alias_rpc -- common/autotest_common.sh@974 -- # wait 2848594 00:05:21.429 00:05:21.429 real 0m1.137s 00:05:21.429 user 0m1.120s 00:05:21.429 sys 0m0.458s 00:05:21.429 21:34:53 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.429 21:34:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.429 ************************************ 00:05:21.429 END TEST alias_rpc 00:05:21.429 ************************************ 00:05:21.429 21:34:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:21.429 21:34:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:21.429 21:34:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.429 21:34:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.429 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:05:21.429 ************************************ 00:05:21.429 START TEST spdkcli_tcp 00:05:21.429 ************************************ 00:05:21.429 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:21.689 * Looking for test storage... 
00:05:21.689 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.689 21:34:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.689 --rc genhtml_branch_coverage=1 00:05:21.689 --rc genhtml_function_coverage=1 00:05:21.689 --rc genhtml_legend=1 00:05:21.689 --rc geninfo_all_blocks=1 00:05:21.689 --rc geninfo_unexecuted_blocks=1 00:05:21.689 00:05:21.689 ' 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.689 --rc genhtml_branch_coverage=1 00:05:21.689 --rc genhtml_function_coverage=1 00:05:21.689 --rc genhtml_legend=1 00:05:21.689 --rc geninfo_all_blocks=1 00:05:21.689 --rc geninfo_unexecuted_blocks=1 
00:05:21.689 00:05:21.689 ' 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.689 --rc genhtml_branch_coverage=1 00:05:21.689 --rc genhtml_function_coverage=1 00:05:21.689 --rc genhtml_legend=1 00:05:21.689 --rc geninfo_all_blocks=1 00:05:21.689 --rc geninfo_unexecuted_blocks=1 00:05:21.689 00:05:21.689 ' 00:05:21.689 21:34:53 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.689 --rc genhtml_branch_coverage=1 00:05:21.690 --rc genhtml_function_coverage=1 00:05:21.690 --rc genhtml_legend=1 00:05:21.690 --rc geninfo_all_blocks=1 00:05:21.690 --rc geninfo_unexecuted_blocks=1 00:05:21.690 00:05:21.690 ' 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.690 21:34:53 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.690 21:34:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2848917 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2848917 00:05:21.690 21:34:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.690 21:34:53 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2848917 ']' 00:05:21.690 21:34:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.690 21:34:53 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.690 21:34:53 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.690 21:34:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.690 21:34:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.690 [2024-11-29 21:34:53.896707] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
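The spdkcli_tcp run that follows checks RPC over TCP by bridging a local port to the UNIX socket with socat, as the trace below shows; condensed, with command lines taken from the log:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # -r conn retries, -t timeout
kill "$socat_pid"   # explicit cleanup here is an assumption; tcp.sh traps this itself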
00:05:21.690 [2024-11-29 21:34:53.896759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848917 ] 00:05:21.949 [2024-11-29 21:34:53.965402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.949 [2024-11-29 21:34:54.006341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.949 [2024-11-29 21:34:54.006343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.209 21:34:54 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.209 21:34:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:22.209 21:34:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2848927 00:05:22.209 21:34:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:22.209 21:34:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:22.209 [ 00:05:22.209 "bdev_malloc_delete", 00:05:22.209 "bdev_malloc_create", 00:05:22.209 "bdev_null_resize", 00:05:22.209 "bdev_null_delete", 00:05:22.209 "bdev_null_create", 00:05:22.209 "bdev_nvme_cuse_unregister", 00:05:22.209 "bdev_nvme_cuse_register", 00:05:22.209 "bdev_opal_new_user", 00:05:22.209 "bdev_opal_set_lock_state", 00:05:22.209 "bdev_opal_delete", 00:05:22.209 "bdev_opal_get_info", 00:05:22.209 "bdev_opal_create", 00:05:22.209 "bdev_nvme_opal_revert", 00:05:22.209 "bdev_nvme_opal_init", 00:05:22.209 "bdev_nvme_send_cmd", 00:05:22.209 "bdev_nvme_set_keys", 00:05:22.209 "bdev_nvme_get_path_iostat", 00:05:22.209 "bdev_nvme_get_mdns_discovery_info", 00:05:22.209 "bdev_nvme_stop_mdns_discovery", 00:05:22.209 "bdev_nvme_start_mdns_discovery", 00:05:22.209 "bdev_nvme_set_multipath_policy", 00:05:22.209 "bdev_nvme_set_preferred_path", 00:05:22.209 "bdev_nvme_get_io_paths", 00:05:22.209 "bdev_nvme_remove_error_injection", 00:05:22.209 "bdev_nvme_add_error_injection", 00:05:22.209 "bdev_nvme_get_discovery_info", 00:05:22.209 "bdev_nvme_stop_discovery", 00:05:22.209 "bdev_nvme_start_discovery", 00:05:22.209 "bdev_nvme_get_controller_health_info", 00:05:22.209 "bdev_nvme_disable_controller", 00:05:22.209 "bdev_nvme_enable_controller", 00:05:22.209 "bdev_nvme_reset_controller", 00:05:22.209 "bdev_nvme_get_transport_statistics", 00:05:22.209 "bdev_nvme_apply_firmware", 00:05:22.209 "bdev_nvme_detach_controller", 00:05:22.209 "bdev_nvme_get_controllers", 00:05:22.209 "bdev_nvme_attach_controller", 00:05:22.209 "bdev_nvme_set_hotplug", 00:05:22.209 "bdev_nvme_set_options", 00:05:22.209 "bdev_passthru_delete", 00:05:22.209 "bdev_passthru_create", 00:05:22.209 "bdev_lvol_set_parent_bdev", 00:05:22.209 "bdev_lvol_set_parent", 00:05:22.209 "bdev_lvol_check_shallow_copy", 00:05:22.209 "bdev_lvol_start_shallow_copy", 00:05:22.209 "bdev_lvol_grow_lvstore", 00:05:22.209 "bdev_lvol_get_lvols", 00:05:22.209 "bdev_lvol_get_lvstores", 00:05:22.209 "bdev_lvol_delete", 00:05:22.209 "bdev_lvol_set_read_only", 00:05:22.209 "bdev_lvol_resize", 00:05:22.209 "bdev_lvol_decouple_parent", 00:05:22.209 "bdev_lvol_inflate", 00:05:22.209 "bdev_lvol_rename", 00:05:22.209 "bdev_lvol_clone_bdev", 00:05:22.209 "bdev_lvol_clone", 00:05:22.209 "bdev_lvol_snapshot", 00:05:22.209 "bdev_lvol_create", 00:05:22.209 "bdev_lvol_delete_lvstore", 00:05:22.209 "bdev_lvol_rename_lvstore", 
00:05:22.209 "bdev_lvol_create_lvstore", 00:05:22.209 "bdev_raid_set_options", 00:05:22.210 "bdev_raid_remove_base_bdev", 00:05:22.210 "bdev_raid_add_base_bdev", 00:05:22.210 "bdev_raid_delete", 00:05:22.210 "bdev_raid_create", 00:05:22.210 "bdev_raid_get_bdevs", 00:05:22.210 "bdev_error_inject_error", 00:05:22.210 "bdev_error_delete", 00:05:22.210 "bdev_error_create", 00:05:22.210 "bdev_split_delete", 00:05:22.210 "bdev_split_create", 00:05:22.210 "bdev_delay_delete", 00:05:22.210 "bdev_delay_create", 00:05:22.210 "bdev_delay_update_latency", 00:05:22.210 "bdev_zone_block_delete", 00:05:22.210 "bdev_zone_block_create", 00:05:22.210 "blobfs_create", 00:05:22.210 "blobfs_detect", 00:05:22.210 "blobfs_set_cache_size", 00:05:22.210 "bdev_aio_delete", 00:05:22.210 "bdev_aio_rescan", 00:05:22.210 "bdev_aio_create", 00:05:22.210 "bdev_ftl_set_property", 00:05:22.210 "bdev_ftl_get_properties", 00:05:22.210 "bdev_ftl_get_stats", 00:05:22.210 "bdev_ftl_unmap", 00:05:22.210 "bdev_ftl_unload", 00:05:22.210 "bdev_ftl_delete", 00:05:22.210 "bdev_ftl_load", 00:05:22.210 "bdev_ftl_create", 00:05:22.210 "bdev_virtio_attach_controller", 00:05:22.210 "bdev_virtio_scsi_get_devices", 00:05:22.210 "bdev_virtio_detach_controller", 00:05:22.210 "bdev_virtio_blk_set_hotplug", 00:05:22.210 "bdev_iscsi_delete", 00:05:22.210 "bdev_iscsi_create", 00:05:22.210 "bdev_iscsi_set_options", 00:05:22.210 "accel_error_inject_error", 00:05:22.210 "ioat_scan_accel_module", 00:05:22.210 "dsa_scan_accel_module", 00:05:22.210 "iaa_scan_accel_module", 00:05:22.210 "keyring_file_remove_key", 00:05:22.210 "keyring_file_add_key", 00:05:22.210 "keyring_linux_set_options", 00:05:22.210 "fsdev_aio_delete", 00:05:22.210 "fsdev_aio_create", 00:05:22.210 "iscsi_get_histogram", 00:05:22.210 "iscsi_enable_histogram", 00:05:22.210 "iscsi_set_options", 00:05:22.210 "iscsi_get_auth_groups", 00:05:22.210 "iscsi_auth_group_remove_secret", 00:05:22.210 "iscsi_auth_group_add_secret", 00:05:22.210 "iscsi_delete_auth_group", 00:05:22.210 "iscsi_create_auth_group", 00:05:22.210 "iscsi_set_discovery_auth", 00:05:22.210 "iscsi_get_options", 00:05:22.210 "iscsi_target_node_request_logout", 00:05:22.210 "iscsi_target_node_set_redirect", 00:05:22.210 "iscsi_target_node_set_auth", 00:05:22.210 "iscsi_target_node_add_lun", 00:05:22.210 "iscsi_get_stats", 00:05:22.210 "iscsi_get_connections", 00:05:22.210 "iscsi_portal_group_set_auth", 00:05:22.210 "iscsi_start_portal_group", 00:05:22.210 "iscsi_delete_portal_group", 00:05:22.210 "iscsi_create_portal_group", 00:05:22.210 "iscsi_get_portal_groups", 00:05:22.210 "iscsi_delete_target_node", 00:05:22.210 "iscsi_target_node_remove_pg_ig_maps", 00:05:22.210 "iscsi_target_node_add_pg_ig_maps", 00:05:22.210 "iscsi_create_target_node", 00:05:22.210 "iscsi_get_target_nodes", 00:05:22.210 "iscsi_delete_initiator_group", 00:05:22.210 "iscsi_initiator_group_remove_initiators", 00:05:22.210 "iscsi_initiator_group_add_initiators", 00:05:22.210 "iscsi_create_initiator_group", 00:05:22.210 "iscsi_get_initiator_groups", 00:05:22.210 "nvmf_set_crdt", 00:05:22.210 "nvmf_set_config", 00:05:22.210 "nvmf_set_max_subsystems", 00:05:22.210 "nvmf_stop_mdns_prr", 00:05:22.210 "nvmf_publish_mdns_prr", 00:05:22.210 "nvmf_subsystem_get_listeners", 00:05:22.210 "nvmf_subsystem_get_qpairs", 00:05:22.210 "nvmf_subsystem_get_controllers", 00:05:22.210 "nvmf_get_stats", 00:05:22.210 "nvmf_get_transports", 00:05:22.210 "nvmf_create_transport", 00:05:22.210 "nvmf_get_targets", 00:05:22.210 "nvmf_delete_target", 00:05:22.210 "nvmf_create_target", 
00:05:22.210 "nvmf_subsystem_allow_any_host", 00:05:22.210 "nvmf_subsystem_set_keys", 00:05:22.210 "nvmf_subsystem_remove_host", 00:05:22.210 "nvmf_subsystem_add_host", 00:05:22.210 "nvmf_ns_remove_host", 00:05:22.210 "nvmf_ns_add_host", 00:05:22.210 "nvmf_subsystem_remove_ns", 00:05:22.210 "nvmf_subsystem_set_ns_ana_group", 00:05:22.210 "nvmf_subsystem_add_ns", 00:05:22.210 "nvmf_subsystem_listener_set_ana_state", 00:05:22.210 "nvmf_discovery_get_referrals", 00:05:22.210 "nvmf_discovery_remove_referral", 00:05:22.210 "nvmf_discovery_add_referral", 00:05:22.210 "nvmf_subsystem_remove_listener", 00:05:22.210 "nvmf_subsystem_add_listener", 00:05:22.210 "nvmf_delete_subsystem", 00:05:22.210 "nvmf_create_subsystem", 00:05:22.210 "nvmf_get_subsystems", 00:05:22.210 "env_dpdk_get_mem_stats", 00:05:22.210 "nbd_get_disks", 00:05:22.210 "nbd_stop_disk", 00:05:22.210 "nbd_start_disk", 00:05:22.210 "ublk_recover_disk", 00:05:22.210 "ublk_get_disks", 00:05:22.210 "ublk_stop_disk", 00:05:22.210 "ublk_start_disk", 00:05:22.210 "ublk_destroy_target", 00:05:22.210 "ublk_create_target", 00:05:22.210 "virtio_blk_create_transport", 00:05:22.210 "virtio_blk_get_transports", 00:05:22.210 "vhost_controller_set_coalescing", 00:05:22.210 "vhost_get_controllers", 00:05:22.210 "vhost_delete_controller", 00:05:22.210 "vhost_create_blk_controller", 00:05:22.210 "vhost_scsi_controller_remove_target", 00:05:22.210 "vhost_scsi_controller_add_target", 00:05:22.210 "vhost_start_scsi_controller", 00:05:22.210 "vhost_create_scsi_controller", 00:05:22.210 "thread_set_cpumask", 00:05:22.210 "scheduler_set_options", 00:05:22.210 "framework_get_governor", 00:05:22.210 "framework_get_scheduler", 00:05:22.210 "framework_set_scheduler", 00:05:22.210 "framework_get_reactors", 00:05:22.210 "thread_get_io_channels", 00:05:22.210 "thread_get_pollers", 00:05:22.210 "thread_get_stats", 00:05:22.210 "framework_monitor_context_switch", 00:05:22.210 "spdk_kill_instance", 00:05:22.210 "log_enable_timestamps", 00:05:22.210 "log_get_flags", 00:05:22.210 "log_clear_flag", 00:05:22.210 "log_set_flag", 00:05:22.210 "log_get_level", 00:05:22.210 "log_set_level", 00:05:22.210 "log_get_print_level", 00:05:22.210 "log_set_print_level", 00:05:22.210 "framework_enable_cpumask_locks", 00:05:22.210 "framework_disable_cpumask_locks", 00:05:22.210 "framework_wait_init", 00:05:22.210 "framework_start_init", 00:05:22.210 "scsi_get_devices", 00:05:22.210 "bdev_get_histogram", 00:05:22.210 "bdev_enable_histogram", 00:05:22.210 "bdev_set_qos_limit", 00:05:22.210 "bdev_set_qd_sampling_period", 00:05:22.210 "bdev_get_bdevs", 00:05:22.210 "bdev_reset_iostat", 00:05:22.210 "bdev_get_iostat", 00:05:22.210 "bdev_examine", 00:05:22.210 "bdev_wait_for_examine", 00:05:22.210 "bdev_set_options", 00:05:22.210 "accel_get_stats", 00:05:22.210 "accel_set_options", 00:05:22.210 "accel_set_driver", 00:05:22.210 "accel_crypto_key_destroy", 00:05:22.210 "accel_crypto_keys_get", 00:05:22.210 "accel_crypto_key_create", 00:05:22.210 "accel_assign_opc", 00:05:22.210 "accel_get_module_info", 00:05:22.210 "accel_get_opc_assignments", 00:05:22.210 "vmd_rescan", 00:05:22.210 "vmd_remove_device", 00:05:22.210 "vmd_enable", 00:05:22.210 "sock_get_default_impl", 00:05:22.210 "sock_set_default_impl", 00:05:22.210 "sock_impl_set_options", 00:05:22.210 "sock_impl_get_options", 00:05:22.210 "iobuf_get_stats", 00:05:22.210 "iobuf_set_options", 00:05:22.210 "keyring_get_keys", 00:05:22.210 "framework_get_pci_devices", 00:05:22.210 "framework_get_config", 00:05:22.210 "framework_get_subsystems", 
00:05:22.210 "fsdev_set_opts", 00:05:22.210 "fsdev_get_opts", 00:05:22.210 "trace_get_info", 00:05:22.210 "trace_get_tpoint_group_mask", 00:05:22.210 "trace_disable_tpoint_group", 00:05:22.210 "trace_enable_tpoint_group", 00:05:22.210 "trace_clear_tpoint_mask", 00:05:22.210 "trace_set_tpoint_mask", 00:05:22.210 "notify_get_notifications", 00:05:22.210 "notify_get_types", 00:05:22.210 "spdk_get_version", 00:05:22.210 "rpc_get_methods" 00:05:22.210 ] 00:05:22.210 21:34:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:22.210 21:34:54 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.210 21:34:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.210 21:34:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:22.210 21:34:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2848917 00:05:22.210 21:34:54 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2848917 ']' 00:05:22.210 21:34:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2848917 00:05:22.210 21:34:54 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:22.210 21:34:54 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.210 21:34:54 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2848917 00:05:22.470 21:34:54 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.470 21:34:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.470 21:34:54 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2848917' 00:05:22.470 killing process with pid 2848917 00:05:22.470 21:34:54 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2848917 00:05:22.470 21:34:54 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2848917 00:05:22.731 00:05:22.731 real 0m1.176s 00:05:22.731 user 0m1.905s 00:05:22.731 sys 0m0.508s 00:05:22.731 21:34:54 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.731 21:34:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.731 ************************************ 00:05:22.731 END TEST spdkcli_tcp 00:05:22.731 ************************************ 00:05:22.731 21:34:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.731 21:34:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.731 21:34:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.731 21:34:54 -- common/autotest_common.sh@10 -- # set +x 00:05:22.731 ************************************ 00:05:22.731 START TEST dpdk_mem_utility 00:05:22.731 ************************************ 00:05:22.731 21:34:54 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.992 * Looking for test storage... 
00:05:22.992 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:22.992 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:22.992 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:22.992 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:22.992 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:22.992 21:34:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.993 21:34:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:22.993 21:34:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:22.993 21:34:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.993 21:34:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:22.993 21:34:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.993 21:34:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.993 21:34:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.993 21:34:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.993 --rc genhtml_branch_coverage=1 00:05:22.993 --rc genhtml_function_coverage=1 00:05:22.993 --rc genhtml_legend=1 00:05:22.993 --rc geninfo_all_blocks=1 00:05:22.993 --rc geninfo_unexecuted_blocks=1 00:05:22.993 00:05:22.993 ' 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.993 --rc 
genhtml_branch_coverage=1 00:05:22.993 --rc genhtml_function_coverage=1 00:05:22.993 --rc genhtml_legend=1 00:05:22.993 --rc geninfo_all_blocks=1 00:05:22.993 --rc geninfo_unexecuted_blocks=1 00:05:22.993 00:05:22.993 ' 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.993 --rc genhtml_branch_coverage=1 00:05:22.993 --rc genhtml_function_coverage=1 00:05:22.993 --rc genhtml_legend=1 00:05:22.993 --rc geninfo_all_blocks=1 00:05:22.993 --rc geninfo_unexecuted_blocks=1 00:05:22.993 00:05:22.993 ' 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:22.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.993 --rc genhtml_branch_coverage=1 00:05:22.993 --rc genhtml_function_coverage=1 00:05:22.993 --rc genhtml_legend=1 00:05:22.993 --rc geninfo_all_blocks=1 00:05:22.993 --rc geninfo_unexecuted_blocks=1 00:05:22.993 00:05:22.993 ' 00:05:22.993 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:22.993 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.993 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2849257 00:05:22.993 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2849257 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2849257 ']' 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.993 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.993 [2024-11-29 21:34:55.136148] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
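[editor's sketch] The dpdk_mem_utility test starting up here drives two SPDK tools against the freshly launched spdk_tgt; both commands appear verbatim later in this trace. A minimal sketch of the flow, assuming the default RPC socket:

  # Ask the target to write its DPDK memory stats to /tmp/spdk_mem_dump.txt
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools and memzones from that dump
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py
  # Per-element detail for heap id 0
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0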
00:05:22.993 [2024-11-29 21:34:55.136205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849257 ] 00:05:22.993 [2024-11-29 21:34:55.206565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.252 [2024-11-29 21:34:55.246142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.252 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.252 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:23.252 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:23.252 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:23.252 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.252 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.252 { 00:05:23.252 "filename": "/tmp/spdk_mem_dump.txt" 00:05:23.252 } 00:05:23.252 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.252 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:23.252 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:23.252 1 heaps totaling size 860.000000 MiB 00:05:23.252 size: 860.000000 MiB heap id: 0 00:05:23.252 end heaps---------- 00:05:23.252 9 mempools totaling size 642.649841 MiB 00:05:23.252 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:23.252 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:23.252 size: 92.545471 MiB name: bdev_io_2849257 00:05:23.252 size: 51.011292 MiB name: evtpool_2849257 00:05:23.252 size: 50.003479 MiB name: msgpool_2849257 00:05:23.252 size: 36.509338 MiB name: fsdev_io_2849257 00:05:23.252 size: 21.763794 MiB name: PDU_Pool 00:05:23.252 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:23.252 size: 0.026123 MiB name: Session_Pool 00:05:23.252 end mempools------- 00:05:23.252 6 memzones totaling size 4.142822 MiB 00:05:23.252 size: 1.000366 MiB name: RG_ring_0_2849257 00:05:23.252 size: 1.000366 MiB name: RG_ring_1_2849257 00:05:23.252 size: 1.000366 MiB name: RG_ring_4_2849257 00:05:23.252 size: 1.000366 MiB name: RG_ring_5_2849257 00:05:23.252 size: 0.125366 MiB name: RG_ring_2_2849257 00:05:23.252 size: 0.015991 MiB name: RG_ring_3_2849257 00:05:23.252 end memzones------- 00:05:23.252 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:23.523 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:23.523 list of free elements. 
size: 13.984680 MiB 00:05:23.523 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:23.523 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:23.523 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:23.523 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:23.523 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:23.523 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:23.523 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:23.523 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:23.523 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:23.523 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:23.523 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:23.523 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:23.523 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:23.523 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:23.523 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:23.523 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:23.523 list of standard malloc elements. size: 199.218628 MiB 00:05:23.523 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:23.523 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:23.523 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:23.523 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:23.523 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:23.524 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:23.524 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:23.524 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:23.524 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:23.524 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:23.524 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:23.524 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:23.524 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:23.524 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:23.524 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:23.524 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:23.524 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:23.524 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:23.524 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:23.524 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:23.524 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:23.524 list of memzone associated elements. size: 646.796692 MiB 00:05:23.524 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:23.524 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:23.524 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:23.524 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:23.524 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:23.524 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2849257_0 00:05:23.524 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:23.524 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2849257_0 00:05:23.524 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:23.524 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2849257_0 00:05:23.524 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:23.524 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2849257_0 00:05:23.524 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:23.524 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:23.524 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:23.524 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:23.524 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:23.524 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2849257 00:05:23.524 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:23.524 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2849257 00:05:23.524 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:23.524 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2849257 00:05:23.524 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:23.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:23.524 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:23.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:23.524 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:23.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:23.524 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:23.524 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:23.524 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:23.524 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2849257 00:05:23.524 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:23.524 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_2849257 00:05:23.524 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:23.524 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2849257 00:05:23.524 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:23.524 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2849257 00:05:23.524 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:23.524 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2849257 00:05:23.524 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:23.524 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2849257 00:05:23.524 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:23.524 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:23.524 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:23.524 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:23.524 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:23.524 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:23.524 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:23.524 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2849257 00:05:23.524 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:23.524 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:23.524 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:23.524 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:23.524 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:23.524 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2849257 00:05:23.524 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:23.524 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:23.524 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:23.524 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2849257 00:05:23.524 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:23.524 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2849257 00:05:23.524 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:23.524 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2849257 00:05:23.524 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:23.524 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:23.524 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:23.524 21:34:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2849257 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2849257 ']' 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2849257 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2849257 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2849257' 
00:05:23.524 killing process with pid 2849257 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2849257 00:05:23.524 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2849257 00:05:23.784 00:05:23.784 real 0m1.016s 00:05:23.784 user 0m0.914s 00:05:23.784 sys 0m0.453s 00:05:23.784 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.784 21:34:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.784 ************************************ 00:05:23.784 END TEST dpdk_mem_utility 00:05:23.784 ************************************ 00:05:23.784 21:34:55 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:23.784 21:34:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.784 21:34:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.784 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:05:23.784 ************************************ 00:05:23.784 START TEST event 00:05:23.784 ************************************ 00:05:23.784 21:34:55 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:24.043 * Looking for test storage... 00:05:24.043 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:24.043 21:34:56 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:24.043 21:34:56 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:24.043 21:34:56 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:24.043 21:34:56 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:24.043 21:34:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.043 21:34:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.043 21:34:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.043 21:34:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.044 21:34:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.044 21:34:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.044 21:34:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.044 21:34:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.044 21:34:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.044 21:34:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.044 21:34:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.044 21:34:56 event -- scripts/common.sh@344 -- # case "$op" in 00:05:24.044 21:34:56 event -- scripts/common.sh@345 -- # : 1 00:05:24.044 21:34:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.044 21:34:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.044 21:34:56 event -- scripts/common.sh@365 -- # decimal 1 00:05:24.044 21:34:56 event -- scripts/common.sh@353 -- # local d=1 00:05:24.044 21:34:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.044 21:34:56 event -- scripts/common.sh@355 -- # echo 1 00:05:24.044 21:34:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.044 21:34:56 event -- scripts/common.sh@366 -- # decimal 2 00:05:24.044 21:34:56 event -- scripts/common.sh@353 -- # local d=2 00:05:24.044 21:34:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.044 21:34:56 event -- scripts/common.sh@355 -- # echo 2 00:05:24.044 21:34:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.044 21:34:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.044 21:34:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.044 21:34:56 event -- scripts/common.sh@368 -- # return 0 00:05:24.044 21:34:56 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.044 21:34:56 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:24.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.044 --rc genhtml_branch_coverage=1 00:05:24.044 --rc genhtml_function_coverage=1 00:05:24.044 --rc genhtml_legend=1 00:05:24.044 --rc geninfo_all_blocks=1 00:05:24.044 --rc geninfo_unexecuted_blocks=1 00:05:24.044 00:05:24.044 ' 00:05:24.044 21:34:56 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:24.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.044 --rc genhtml_branch_coverage=1 00:05:24.044 --rc genhtml_function_coverage=1 00:05:24.044 --rc genhtml_legend=1 00:05:24.044 --rc geninfo_all_blocks=1 00:05:24.044 --rc geninfo_unexecuted_blocks=1 00:05:24.044 00:05:24.044 ' 00:05:24.044 21:34:56 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:24.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.044 --rc genhtml_branch_coverage=1 00:05:24.044 --rc genhtml_function_coverage=1 00:05:24.044 --rc genhtml_legend=1 00:05:24.044 --rc geninfo_all_blocks=1 00:05:24.044 --rc geninfo_unexecuted_blocks=1 00:05:24.044 00:05:24.044 ' 00:05:24.044 21:34:56 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:24.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.044 --rc genhtml_branch_coverage=1 00:05:24.044 --rc genhtml_function_coverage=1 00:05:24.044 --rc genhtml_legend=1 00:05:24.044 --rc geninfo_all_blocks=1 00:05:24.044 --rc geninfo_unexecuted_blocks=1 00:05:24.044 00:05:24.044 ' 00:05:24.044 21:34:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:24.044 21:34:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:24.044 21:34:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.044 21:34:56 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:24.044 21:34:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.044 21:34:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.044 ************************************ 00:05:24.044 START TEST event_perf 00:05:24.044 ************************************ 00:05:24.044 21:34:56 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
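[editor's sketch] In the event_perf invocation above, -m 0xF is a hexadecimal core mask pinning reactors to lcores 0-3 and -t 1 is the measurement window in seconds; each lcore prints the events it completed in that window, so their sum over the elapsed time is the aggregate rate visible in the output below. A minimal standalone rerun under different assumptions (two cores, five-second window):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./test/event/event_perf/event_perf -m 0x3 -t 5   # lcores 0-1, 5 s window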
00:05:24.044 Running I/O for 1 seconds...[2024-11-29 21:34:56.223565] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:24.044 [2024-11-29 21:34:56.223645] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849514 ] 00:05:24.303 [2024-11-29 21:34:56.298695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.303 [2024-11-29 21:34:56.340561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.303 [2024-11-29 21:34:56.340715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.303 [2024-11-29 21:34:56.340737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.303 [2024-11-29 21:34:56.340739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.238 Running I/O for 1 seconds... 00:05:25.238 lcore 0: 207174 00:05:25.238 lcore 1: 207173 00:05:25.238 lcore 2: 207174 00:05:25.238 lcore 3: 207174 00:05:25.238 done. 00:05:25.238 00:05:25.238 real 0m1.198s 00:05:25.238 user 0m4.104s 00:05:25.238 sys 0m0.093s 00:05:25.238 21:34:57 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.238 21:34:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.238 ************************************ 00:05:25.238 END TEST event_perf 00:05:25.238 ************************************ 00:05:25.238 21:34:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:25.238 21:34:57 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:25.238 21:34:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.238 21:34:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.238 ************************************ 00:05:25.238 START TEST event_reactor 00:05:25.238 ************************************ 00:05:25.238 21:34:57 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:25.496 [2024-11-29 21:34:57.486279] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:25.497 [2024-11-29 21:34:57.486353] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849670 ] 00:05:25.497 [2024-11-29 21:34:57.559201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.497 [2024-11-29 21:34:57.597367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.432 test_start 00:05:26.432 oneshot 00:05:26.432 tick 100 00:05:26.432 tick 100 00:05:26.432 tick 250 00:05:26.432 tick 100 00:05:26.432 tick 100 00:05:26.432 tick 250 00:05:26.432 tick 100 00:05:26.432 tick 500 00:05:26.432 tick 100 00:05:26.432 tick 100 00:05:26.432 tick 250 00:05:26.432 tick 100 00:05:26.432 tick 100 00:05:26.432 test_end 00:05:26.432 00:05:26.432 real 0m1.188s 00:05:26.432 user 0m1.100s 00:05:26.432 sys 0m0.085s 00:05:26.432 21:34:58 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.432 21:34:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:26.432 ************************************ 00:05:26.432 END TEST event_reactor 00:05:26.432 ************************************ 00:05:26.691 21:34:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.691 21:34:58 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:26.691 21:34:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.691 21:34:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.691 ************************************ 00:05:26.691 START TEST event_reactor_perf 00:05:26.691 ************************************ 00:05:26.691 21:34:58 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.691 [2024-11-29 21:34:58.755820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:26.691 [2024-11-29 21:34:58.755905] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849911 ] 00:05:26.691 [2024-11-29 21:34:58.828841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.691 [2024-11-29 21:34:58.867898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.071 test_start 00:05:28.072 test_end 00:05:28.072 Performance: 526453 events per second 00:05:28.072 00:05:28.072 real 0m1.190s 00:05:28.072 user 0m1.095s 00:05:28.072 sys 0m0.091s 00:05:28.072 21:34:59 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.072 21:34:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.072 ************************************ 00:05:28.072 END TEST event_reactor_perf 00:05:28.072 ************************************ 00:05:28.072 21:34:59 event -- event/event.sh@49 -- # uname -s 00:05:28.072 21:34:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:28.072 21:34:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:28.072 21:34:59 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.072 21:34:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.072 21:34:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.072 ************************************ 00:05:28.072 START TEST event_scheduler 00:05:28.072 ************************************ 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:28.072 * Looking for test storage... 
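[editor's sketch] The two reactor tests above exercise the same event loop at different grain: event_reactor's tick trace appears to record its timed pollers firing on reactor 0, while reactor_perf's "Performance: 526453 events per second" is simply events completed divided by the -t 1 window. A minimal rerun of the perf variant with a longer window, using this job's tree layout:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 5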
00:05:28.072 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.072 21:35:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.072 --rc genhtml_branch_coverage=1 00:05:28.072 --rc genhtml_function_coverage=1 00:05:28.072 --rc genhtml_legend=1 00:05:28.072 --rc geninfo_all_blocks=1 00:05:28.072 --rc geninfo_unexecuted_blocks=1 00:05:28.072 00:05:28.072 ' 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.072 --rc genhtml_branch_coverage=1 00:05:28.072 --rc genhtml_function_coverage=1 00:05:28.072 --rc genhtml_legend=1 00:05:28.072 --rc geninfo_all_blocks=1 00:05:28.072 --rc geninfo_unexecuted_blocks=1 00:05:28.072 00:05:28.072 ' 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.072 --rc genhtml_branch_coverage=1 00:05:28.072 --rc genhtml_function_coverage=1 00:05:28.072 --rc genhtml_legend=1 00:05:28.072 --rc geninfo_all_blocks=1 00:05:28.072 --rc geninfo_unexecuted_blocks=1 00:05:28.072 00:05:28.072 ' 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.072 --rc genhtml_branch_coverage=1 00:05:28.072 --rc genhtml_function_coverage=1 00:05:28.072 --rc genhtml_legend=1 00:05:28.072 --rc geninfo_all_blocks=1 00:05:28.072 --rc geninfo_unexecuted_blocks=1 00:05:28.072 00:05:28.072 ' 00:05:28.072 21:35:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:28.072 21:35:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2850227 00:05:28.072 21:35:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.072 21:35:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:28.072 21:35:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2850227 
00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2850227 ']' 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.072 21:35:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.072 [2024-11-29 21:35:00.255573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:28.072 [2024-11-29 21:35:00.255629] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850227 ] 00:05:28.332 [2024-11-29 21:35:00.322751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.332 [2024-11-29 21:35:00.365208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.332 [2024-11-29 21:35:00.366681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.332 [2024-11-29 21:35:00.366704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.332 [2024-11-29 21:35:00.366707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.332 21:35:00 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.332 21:35:00 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:28.332 21:35:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:28.332 21:35:00 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.332 21:35:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.332 [2024-11-29 21:35:00.415333] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:28.332 [2024-11-29 21:35:00.415352] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:28.332 [2024-11-29 21:35:00.415364] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:28.332 [2024-11-29 21:35:00.415371] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:28.333 [2024-11-29 21:35:00.415378] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:28.333 21:35:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.333 21:35:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:28.333 21:35:00 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.333 21:35:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.333 [2024-11-29 21:35:00.485182] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
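[editor's sketch] With the dynamic scheduler selected and framework_start_init done, the scheduler_create_thread subtest below builds a mix of pinned busy and idle threads through an rpc.py plugin (rpc_cmd in this trace is the autotest wrapper around scripts/rpc.py). A minimal sketch of the equivalent direct calls, assuming scheduler_plugin is importable, e.g. via PYTHONPATH pointing at the scheduler test directory:

  export PYTHONPATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler
  # Thread pinned to lcore 0, ~100% active
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Drop thread 11 to ~50% active, then delete thread 12
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12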
00:05:28.333 21:35:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.333 21:35:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:28.333 21:35:00 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.333 21:35:00 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.333 21:35:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.333 ************************************ 00:05:28.333 START TEST scheduler_create_thread 00:05:28.333 ************************************ 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.333 2 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.333 3 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.333 4 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.333 5 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.333 6 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.333 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.593 7 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.593 8 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.593 9 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.593 10 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.593 21:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.973 21:35:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.973 21:35:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:29.973 21:35:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:29.973 21:35:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.973 21:35:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.910 21:35:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.910 21:35:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:30.910 21:35:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.910 21:35:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.479 21:35:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.479 21:35:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:31.479 21:35:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:31.479 21:35:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.479 21:35:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.416 21:35:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.416 00:05:32.416 real 0m3.892s 00:05:32.416 user 0m0.022s 00:05:32.416 sys 0m0.009s 00:05:32.416 21:35:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.416 21:35:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.416 ************************************ 00:05:32.416 END TEST scheduler_create_thread 00:05:32.416 ************************************ 00:05:32.416 21:35:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:32.416 21:35:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2850227 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2850227 ']' 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2850227 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2850227 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2850227' 00:05:32.416 killing process with pid 2850227 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2850227 00:05:32.416 21:35:04 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2850227 00:05:32.738 [2024-11-29 21:35:04.796668] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
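[editor's sketch] The section above also shows the scheduler being switched at runtime: framework_set_scheduler dynamic is issued before framework_start_init, the dpdk governor fails to initialize on this host's core mask, and the dynamic scheduler's options land at load limit 20, core limit 80, core busy 95. A minimal sketch of setting and inspecting that state on a live target, assuming the default RPC socket:

  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_get_scheduler   # reports the active scheduler and its options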
00:05:33.025 00:05:33.026 real 0m5.048s 00:05:33.026 user 0m9.501s 00:05:33.026 sys 0m0.421s 00:05:33.026 21:35:05 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.026 21:35:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.026 ************************************ 00:05:33.026 END TEST event_scheduler 00:05:33.026 ************************************ 00:05:33.026 21:35:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:33.026 21:35:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:33.026 21:35:05 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.026 21:35:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.026 21:35:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.026 ************************************ 00:05:33.026 START TEST app_repeat 00:05:33.026 ************************************ 00:05:33.026 21:35:05 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2851112 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2851112' 00:05:33.026 Process app_repeat pid: 2851112 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:33.026 spdk_app_start Round 0 00:05:33.026 21:35:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2851112 /var/tmp/spdk-nbd.sock 00:05:33.026 21:35:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2851112 ']' 00:05:33.026 21:35:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.026 21:35:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.026 21:35:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.026 21:35:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.026 21:35:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.026 [2024-11-29 21:35:05.167845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:33.026 [2024-11-29 21:35:05.167911] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851112 ] 00:05:33.026 [2024-11-29 21:35:05.239545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.305 [2024-11-29 21:35:05.281027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.305 [2024-11-29 21:35:05.281031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.305 21:35:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.305 21:35:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:33.305 21:35:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.305 Malloc0 00:05:33.564 21:35:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.564 Malloc1 00:05:33.564 21:35:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.564 21:35:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.823 /dev/nbd0 00:05:33.823 21:35:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.823 21:35:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.823 1+0 records in 00:05:33.823 1+0 records out 00:05:33.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260383 s, 15.7 MB/s 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.823 21:35:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.823 21:35:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.823 21:35:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.824 21:35:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.083 /dev/nbd1 00:05:34.083 21:35:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.083 21:35:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.083 1+0 records in 00:05:34.083 1+0 records out 00:05:34.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224928 s, 18.2 MB/s 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.083 21:35:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:34.083 21:35:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.083 21:35:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.083 21:35:06 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.083 21:35:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.083 21:35:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.343 { 00:05:34.343 "nbd_device": "/dev/nbd0", 00:05:34.343 "bdev_name": "Malloc0" 00:05:34.343 }, 00:05:34.343 { 00:05:34.343 "nbd_device": "/dev/nbd1", 00:05:34.343 "bdev_name": "Malloc1" 00:05:34.343 } 00:05:34.343 ]' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.343 { 00:05:34.343 "nbd_device": "/dev/nbd0", 00:05:34.343 "bdev_name": "Malloc0" 00:05:34.343 }, 00:05:34.343 { 00:05:34.343 "nbd_device": "/dev/nbd1", 00:05:34.343 "bdev_name": "Malloc1" 00:05:34.343 } 00:05:34.343 ]' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.343 /dev/nbd1' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.343 /dev/nbd1' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.343 256+0 records in 00:05:34.343 256+0 records out 00:05:34.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011104 s, 94.4 MB/s 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.343 256+0 records in 00:05:34.343 256+0 records out 00:05:34.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191717 s, 54.7 MB/s 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.343 256+0 records in 00:05:34.343 256+0 records out 00:05:34.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199924 s, 52.4 MB/s 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.343 21:35:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.603 21:35:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.863 21:35:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.863 21:35:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.863 21:35:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.863 21:35:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.863 21:35:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.863 21:35:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:05:34.863 21:35:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.863 21:35:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.863 21:35:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.863 21:35:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.863 21:35:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.123 21:35:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.123 21:35:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.383 21:35:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.383 [2024-11-29 21:35:07.615446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.642 [2024-11-29 21:35:07.651097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.642 [2024-11-29 21:35:07.651098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.642 [2024-11-29 21:35:07.691725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.642 [2024-11-29 21:35:07.691769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.931 21:35:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.931 21:35:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:38.931 spdk_app_start Round 1 00:05:38.931 21:35:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2851112 /var/tmp/spdk-nbd.sock 00:05:38.931 21:35:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2851112 ']' 00:05:38.931 21:35:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.931 21:35:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.931 21:35:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
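Round 0 above is one complete nbd_rpc_data_verify pass: fill a scratch file with 1 MiB of random data, write it through each nbd device with O_DIRECT, read it back with cmp, then tear the devices down. Reduced to its shell core (paths, sizes, and flags copied from the trace; this is a sketch of the helper's data path, not its exact source):

  # One write/verify pass over both nbd devices, as traced in Round 0.
  tmp=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write past the page cache
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                             # byte-for-byte readback check
  done
  rm "$tmp"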
00:05:38.931 21:35:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.931 21:35:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.931 21:35:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.931 21:35:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:38.931 21:35:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.931 Malloc0 00:05:38.931 21:35:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.931 Malloc1 00:05:38.931 21:35:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.931 21:35:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.191 /dev/nbd0 00:05:39.191 21:35:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.191 21:35:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:39.191 1+0 records in 00:05:39.191 1+0 records out 00:05:39.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230348 s, 17.8 MB/s 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:39.191 21:35:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:39.191 21:35:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.191 21:35:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.191 21:35:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.451 /dev/nbd1 00:05:39.451 21:35:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.451 21:35:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.451 1+0 records in 00:05:39.451 1+0 records out 00:05:39.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248326 s, 16.5 MB/s 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:39.451 21:35:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:39.451 21:35:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.451 21:35:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.451 21:35:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.451 21:35:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.451 21:35:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.710 { 00:05:39.710 
"nbd_device": "/dev/nbd0", 00:05:39.710 "bdev_name": "Malloc0" 00:05:39.710 }, 00:05:39.710 { 00:05:39.710 "nbd_device": "/dev/nbd1", 00:05:39.710 "bdev_name": "Malloc1" 00:05:39.710 } 00:05:39.710 ]' 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.710 { 00:05:39.710 "nbd_device": "/dev/nbd0", 00:05:39.710 "bdev_name": "Malloc0" 00:05:39.710 }, 00:05:39.710 { 00:05:39.710 "nbd_device": "/dev/nbd1", 00:05:39.710 "bdev_name": "Malloc1" 00:05:39.710 } 00:05:39.710 ]' 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.710 /dev/nbd1' 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.710 /dev/nbd1' 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.710 256+0 records in 00:05:39.710 256+0 records out 00:05:39.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103423 s, 101 MB/s 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.710 256+0 records in 00:05:39.710 256+0 records out 00:05:39.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192846 s, 54.4 MB/s 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.710 21:35:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.710 256+0 records in 00:05:39.710 256+0 records out 00:05:39.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201313 s, 52.1 MB/s 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.711 21:35:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.970 21:35:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.230 21:35:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.489 21:35:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.489 21:35:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.489 21:35:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.749 [2024-11-29 21:35:12.864858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.749 [2024-11-29 21:35:12.899782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.749 [2024-11-29 21:35:12.899783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.749 [2024-11-29 21:35:12.941337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.749 [2024-11-29 21:35:12.941379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.035 21:35:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.036 21:35:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:44.036 spdk_app_start Round 2 00:05:44.036 21:35:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2851112 /var/tmp/spdk-nbd.sock 00:05:44.036 21:35:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2851112 ']' 00:05:44.036 21:35:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.036 21:35:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.036 21:35:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
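Every round gates on /proc/partitions the same way: waitfornbd polls until the kernel exposes the device before any dd runs, and waitfornbd_exit polls until the entry disappears after nbd_stop_disk. A hedged reconstruction of that polling idiom (the 20-attempt bound and the grep -q -w test are visible in the trace; the sleep interval is an assumption, since the helper's delay never appears in the xtrace):

  # Poll /proc/partitions until a device name appears (waitfornbd) or
  # vanishes (waitfornbd_exit); both loops in the trace cap at 20 tries.
  wait_for_nbd() {
      local name=$1 want=$2 i
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$name" /proc/partitions; then
              [ "$want" = present ] && return 0
          else
              [ "$want" = absent ] && return 0
          fi
          sleep 0.1   # assumed interval; not recorded in this log
      done
      return 1
  }
  wait_for_nbd nbd0 present   # block until /dev/nbd0 is usable
  wait_for_nbd nbd0 absent    # block until it has been torn down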
00:05:44.036 21:35:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.036 21:35:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.036 21:35:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.036 21:35:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:44.036 21:35:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.036 Malloc0 00:05:44.036 21:35:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.036 Malloc1 00:05:44.295 21:35:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.295 /dev/nbd0 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.295 21:35:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.295 21:35:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:44.296 1+0 records in 00:05:44.296 1+0 records out 00:05:44.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222197 s, 18.4 MB/s 00:05:44.296 21:35:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.555 /dev/nbd1 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.555 1+0 records in 00:05:44.555 1+0 records out 00:05:44.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251389 s, 16.3 MB/s 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.555 21:35:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.555 21:35:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.814 { 00:05:44.814 
"nbd_device": "/dev/nbd0", 00:05:44.814 "bdev_name": "Malloc0" 00:05:44.814 }, 00:05:44.814 { 00:05:44.814 "nbd_device": "/dev/nbd1", 00:05:44.814 "bdev_name": "Malloc1" 00:05:44.814 } 00:05:44.814 ]' 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.814 { 00:05:44.814 "nbd_device": "/dev/nbd0", 00:05:44.814 "bdev_name": "Malloc0" 00:05:44.814 }, 00:05:44.814 { 00:05:44.814 "nbd_device": "/dev/nbd1", 00:05:44.814 "bdev_name": "Malloc1" 00:05:44.814 } 00:05:44.814 ]' 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.814 /dev/nbd1' 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.814 /dev/nbd1' 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.814 21:35:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.815 21:35:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.815 21:35:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.815 21:35:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.815 21:35:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.815 21:35:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.815 21:35:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.815 256+0 records in 00:05:44.815 256+0 records out 00:05:44.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108142 s, 97.0 MB/s 00:05:44.815 21:35:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.815 21:35:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.074 256+0 records in 00:05:45.074 256+0 records out 00:05:45.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195027 s, 53.8 MB/s 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.074 256+0 records in 00:05:45.074 256+0 records out 00:05:45.074 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200117 s, 52.4 MB/s 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.074 21:35:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.333 21:35:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.334 21:35:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.593 21:35:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.593 21:35:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.852 21:35:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.110 [2024-11-29 21:35:18.160018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.110 [2024-11-29 21:35:18.194936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.111 [2024-11-29 21:35:18.194938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.111 [2024-11-29 21:35:18.234990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.111 [2024-11-29 21:35:18.235032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.400 21:35:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2851112 /var/tmp/spdk-nbd.sock 00:05:49.400 21:35:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2851112 ']' 00:05:49.400 21:35:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.400 21:35:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.400 21:35:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
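After each teardown the harness proves nothing is still attached: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts them; the bare "true" in the trace is the || fallback that absorbs grep's exit status 1 when the count is zero. The same check written out (socket path and RPC name as recorded; the final assertion line is illustrative):

  # Count nbd devices still attached via the RPC server on spdk-nbd.sock.
  sock=/var/tmp/spdk-nbd.sock
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  disks=$($rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
  count=$(echo "$disks" | grep -c /dev/nbd || true)   # '|| true': grep exits 1 on 0 matches
  [ "$count" -eq 0 ] || echo "unexpected nbd devices still attached: $disks"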
00:05:49.400 21:35:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.400 21:35:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:49.400 21:35:21 event.app_repeat -- event/event.sh@39 -- # killprocess 2851112 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2851112 ']' 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2851112 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2851112 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2851112' 00:05:49.400 killing process with pid 2851112 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2851112 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2851112 00:05:49.400 spdk_app_start is called in Round 0. 00:05:49.400 Shutdown signal received, stop current app iteration 00:05:49.400 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:49.400 spdk_app_start is called in Round 1. 00:05:49.400 Shutdown signal received, stop current app iteration 00:05:49.400 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:49.400 spdk_app_start is called in Round 2. 00:05:49.400 Shutdown signal received, stop current app iteration 00:05:49.400 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:05:49.400 spdk_app_start is called in Round 3. 
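The shutdown that closes both event_scheduler and app_repeat is the shared killprocess helper traced above: confirm the pid is alive with kill -0, check the command name so a bare sudo is never signalled, then send the default SIGTERM and reap with wait. A sketch of that flow under those assumptions (SIGTERM is inferred from the unadorned kill in the trace):

  # Sketch of the killprocess flow traced for pids 2850227 and 2851112.
  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 1                    # still alive?
      if [ "$(uname)" = Linux ]; then
          name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_2 in this log
          [ "$name" = sudo ] && return 1            # never signal a bare sudo
      fi
      echo "killing process with pid $pid"
      kill "$pid"                                   # default signal is SIGTERM
      wait "$pid" || true                           # reap the child; ignore exit code
  }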
00:05:49.400 Shutdown signal received, stop current app iteration 00:05:49.400 21:35:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:49.400 21:35:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:49.400 00:05:49.400 real 0m16.270s 00:05:49.400 user 0m35.069s 00:05:49.400 sys 0m3.057s 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.400 21:35:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.400 ************************************ 00:05:49.400 END TEST app_repeat 00:05:49.400 ************************************ 00:05:49.400 21:35:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:49.400 21:35:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:49.400 21:35:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.400 21:35:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.400 21:35:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.400 ************************************ 00:05:49.400 START TEST cpu_locks 00:05:49.400 ************************************ 00:05:49.400 21:35:21 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:49.400 * Looking for test storage... 00:05:49.400 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:49.400 21:35:21 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.400 21:35:21 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.400 21:35:21 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.660 21:35:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.660 --rc genhtml_branch_coverage=1 00:05:49.660 --rc genhtml_function_coverage=1 00:05:49.660 --rc genhtml_legend=1 00:05:49.660 --rc geninfo_all_blocks=1 00:05:49.660 --rc geninfo_unexecuted_blocks=1 00:05:49.660 00:05:49.660 ' 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.660 --rc genhtml_branch_coverage=1 00:05:49.660 --rc genhtml_function_coverage=1 00:05:49.660 --rc genhtml_legend=1 00:05:49.660 --rc geninfo_all_blocks=1 00:05:49.660 --rc geninfo_unexecuted_blocks=1 00:05:49.660 00:05:49.660 ' 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.660 --rc genhtml_branch_coverage=1 00:05:49.660 --rc genhtml_function_coverage=1 00:05:49.660 --rc genhtml_legend=1 00:05:49.660 --rc geninfo_all_blocks=1 00:05:49.660 --rc geninfo_unexecuted_blocks=1 00:05:49.660 00:05:49.660 ' 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.660 --rc genhtml_branch_coverage=1 00:05:49.660 --rc genhtml_function_coverage=1 00:05:49.660 --rc genhtml_legend=1 00:05:49.660 --rc geninfo_all_blocks=1 00:05:49.660 --rc geninfo_unexecuted_blocks=1 00:05:49.660 00:05:49.660 ' 00:05:49.660 21:35:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:49.660 21:35:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:49.660 21:35:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:49.660 21:35:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.660 21:35:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.660 ************************************ 
00:05:49.660 START TEST default_locks 00:05:49.660 ************************************ 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2854270 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2854270 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2854270 ']' 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.660 21:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.660 [2024-11-29 21:35:21.742308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:49.660 [2024-11-29 21:35:21.742352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854270 ] 00:05:49.660 [2024-11-29 21:35:21.813046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.660 [2024-11-29 21:35:21.852215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.919 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.919 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:49.919 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2854270 00:05:49.919 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2854270 00:05:49.919 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.178 lslocks: write error 00:05:50.178 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2854270 00:05:50.178 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2854270 ']' 00:05:50.178 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2854270 00:05:50.178 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:50.178 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.437 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2854270 00:05:50.437 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.437 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.437 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2854270' 00:05:50.437 killing process with pid 2854270 00:05:50.437 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2854270 00:05:50.437 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2854270 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2854270 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2854270 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2854270 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2854270 ']' 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
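Note on the default_locks run above: the test asserts the lock's existence with lslocks piped through grep, and the "lslocks: write error" in the trace is only the broken pipe from grep -q exiting at the first match, not a failure. A minimal standalone sketch of that check follows (helper name and lock-file prefix are taken from the trace; everything else is assumed):

    #!/usr/bin/env bash
    # Sketch of the locks_exist check seen above. spdk_tgt holds one advisory
    # lock file per claimed core, e.g. /var/tmp/spdk_cpu_lock_000 for core 0
    # when started with -m 0x1.
    pid=${1:?usage: $0 <spdk_tgt pid>}

    locks_exist() {
        # grep -q exits on the first match, which is why the trace shows
        # "lslocks: write error" (a harmless EPIPE back to lslocks).
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    if locks_exist "$pid"; then
        echo "pid $pid holds spdk_cpu_lock files"
    else
        echo "no core locks held by pid $pid" >&2
        exit 1
    fi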
00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.696 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2854270) - No such process 00:05:50.696 ERROR: process (pid: 2854270) is no longer running 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.696 00:05:50.696 real 0m1.105s 00:05:50.696 user 0m1.041s 00:05:50.696 sys 0m0.534s 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.696 21:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.696 ************************************ 00:05:50.696 END TEST default_locks 00:05:50.696 ************************************ 00:05:50.696 21:35:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:50.696 21:35:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.696 21:35:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.696 21:35:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.696 ************************************ 00:05:50.696 START TEST default_locks_via_rpc 00:05:50.696 ************************************ 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2854560 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2854560 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2854560 ']' 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
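The negative half of default_locks, just above, relies on the harness's NOT wrapper: the test passes only when the wrapped command fails, e.g. probing a pid that was already killed. An assumed, simplified form of that pattern (the real helper in autotest_common.sh also validates its argument):

    # Succeed only if "$@" fails; mirrors "NOT waitforlisten <dead pid>" above.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # test success means the command exited non-zero
    }

    # kill -0 probes for existence without sending a signal; the pid is gone.
    NOT kill -0 2854270 2>/dev/null && echo "pid 2854270 is gone, as expected"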
00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.696 21:35:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.696 [2024-11-29 21:35:22.937516] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:50.696 [2024-11-29 21:35:22.937561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854560 ] 00:05:50.955 [2024-11-29 21:35:23.008111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.955 [2024-11-29 21:35:23.047337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.214 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.214 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2854560 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2854560 00:05:51.215 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2854560 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2854560 ']' 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2854560 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2854560 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.783 
21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2854560' 00:05:51.783 killing process with pid 2854560 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2854560 00:05:51.783 21:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2854560 00:05:52.042 00:05:52.042 real 0m1.347s 00:05:52.042 user 0m1.294s 00:05:52.042 sys 0m0.633s 00:05:52.042 21:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.042 21:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.042 ************************************ 00:05:52.042 END TEST default_locks_via_rpc 00:05:52.042 ************************************ 00:05:52.042 21:35:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:52.042 21:35:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.042 21:35:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.042 21:35:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.301 ************************************ 00:05:52.301 START TEST non_locking_app_on_locked_coremask 00:05:52.301 ************************************ 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2854857 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2854857 /var/tmp/spdk.sock 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2854857 ']' 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.301 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.301 [2024-11-29 21:35:24.362077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
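The default_locks_via_rpc test that just finished differs from default_locks only in how the locks are toggled: at runtime, over the target's JSON-RPC socket, using the framework_disable_cpumask_locks and framework_enable_cpumask_locks methods shown verbatim in the trace. A sketch with SPDK's stock rpc.py client (script path relative to an SPDK checkout is assumed):

    # Release and re-acquire a running target's CPU core locks over JSON-RPC.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # The trace's no_locks check shows the lock-file glob comes back empty here.
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no lock files remain"
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks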
00:05:52.301 [2024-11-29 21:35:24.362126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854857 ] 00:05:52.301 [2024-11-29 21:35:24.432582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.301 [2024-11-29 21:35:24.471939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2854860 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2854860 /var/tmp/spdk2.sock 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2854860 ']' 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.560 21:35:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.560 [2024-11-29 21:35:24.721523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:52.561 [2024-11-29 21:35:24.721582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854860 ] 00:05:52.819 [2024-11-29 21:35:24.823295] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
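At this point non_locking_app_on_locked_coremask has both targets up on core 0: the first claims the core lock, and the second opts out with --disable-cpumask-locks (hence the "CPU core locks deactivated." notice above). A sketch of that setup, with flags and socket paths exactly as in the trace (run from an SPDK build tree; the sleeps are a crude stand-in for the harness's waitforlisten):

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    pid_locked=$!
    sleep 2
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    pid_unlocked=$!
    sleep 2
    # Only the locked instance shows up here; the unlocked one took no lock.
    lslocks -p "$pid_locked" | grep spdk_cpu_lock
    kill "$pid_locked" "$pid_unlocked"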
00:05:52.819 [2024-11-29 21:35:24.823328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.819 [2024-11-29 21:35:24.903516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.387 21:35:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.387 21:35:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.387 21:35:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2854857 00:05:53.387 21:35:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2854857 00:05:53.387 21:35:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.326 lslocks: write error 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2854857 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2854857 ']' 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2854857 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2854857 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2854857' 00:05:54.326 killing process with pid 2854857 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2854857 00:05:54.326 21:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2854857 00:05:54.895 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2854860 00:05:54.895 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2854860 ']' 00:05:54.895 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2854860 00:05:54.895 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.895 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.895 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2854860 00:05:55.155 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.155 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.155 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2854860' 00:05:55.155 
killing process with pid 2854860 00:05:55.155 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2854860 00:05:55.155 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2854860 00:05:55.415 00:05:55.415 real 0m3.165s 00:05:55.415 user 0m3.319s 00:05:55.415 sys 0m1.210s 00:05:55.415 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.415 21:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.415 ************************************ 00:05:55.415 END TEST non_locking_app_on_locked_coremask 00:05:55.415 ************************************ 00:05:55.415 21:35:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:55.415 21:35:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.415 21:35:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.415 21:35:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.415 ************************************ 00:05:55.415 START TEST locking_app_on_unlocked_coremask 00:05:55.415 ************************************ 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2855427 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2855427 /var/tmp/spdk.sock 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2855427 ']' 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.415 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.415 [2024-11-29 21:35:27.614474] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:55.415 [2024-11-29 21:35:27.614530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855427 ] 00:05:55.675 [2024-11-29 21:35:27.684615] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
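locking_app_on_unlocked_coremask, starting above, mirrors the previous test: now the first target is the unlocked one. The underlying mechanism in all of these tests is an advisory lock on a per-core file under /var/tmp. The snippet below is an illustration only of that pattern between two cooperating shell processes; SPDK takes its locks internally, and this does not interoperate with a real target's locks:

    # Two copies of this script exclude each other on a demo per-core file
    # (demo path assumed; not SPDK's actual lock).
    lockfile=/var/tmp/demo_cpu_lock_000
    exec 9> "$lockfile"
    if flock -n 9; then
        echo "claimed $lockfile; a second copy of this script would now fail"
        sleep 30   # hold the lock so the exclusion can be observed
    else
        echo "$lockfile already claimed by another instance"
    fi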
00:05:55.675 [2024-11-29 21:35:27.684641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.675 [2024-11-29 21:35:27.722905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2855435 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2855435 /var/tmp/spdk2.sock 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2855435 ']' 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.675 21:35:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.934 [2024-11-29 21:35:27.968898] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:55.934 [2024-11-29 21:35:27.968946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855435 ] 00:05:55.934 [2024-11-29 21:35:28.065673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.934 [2024-11-29 21:35:28.140168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.873 21:35:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.873 21:35:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.873 21:35:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2855435 00:05:56.873 21:35:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2855435 00:05:56.873 21:35:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.828 lslocks: write error 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2855427 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2855427 ']' 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2855427 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2855427 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2855427' 00:05:57.828 killing process with pid 2855427 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2855427 00:05:57.828 21:35:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2855427 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2855435 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2855435 ']' 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2855435 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2855435 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.401 21:35:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2855435' 00:05:58.401 killing process with pid 2855435 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2855435 00:05:58.401 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2855435 00:05:58.748 00:05:58.748 real 0m3.374s 00:05:58.748 user 0m3.553s 00:05:58.748 sys 0m1.307s 00:05:58.748 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.748 21:35:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.748 ************************************ 00:05:58.748 END TEST locking_app_on_unlocked_coremask 00:05:58.748 ************************************ 00:05:59.008 21:35:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.008 21:35:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.008 21:35:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.008 21:35:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.008 ************************************ 00:05:59.008 START TEST locking_app_on_locked_coremask 00:05:59.008 ************************************ 00:05:59.008 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2856004 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2856004 /var/tmp/spdk.sock 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2856004 ']' 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.009 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.009 [2024-11-29 21:35:31.068087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
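Every test above tears down through the same killprocess helper whose steps are visible in the trace: probe the pid, confirm the process name (reactor_0 for spdk_tgt), refuse to kill sudo, then kill and reap. An assumed, condensed form:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1        # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")       # reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1                # safety check from the trace
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                       # reaps only our own children
    }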
00:05:59.009 [2024-11-29 21:35:31.068136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856004 ] 00:05:59.009 [2024-11-29 21:35:31.135101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.009 [2024-11-29 21:35:31.170202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2856021 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2856021 /var/tmp/spdk2.sock 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2856021 /var/tmp/spdk2.sock 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2856021 /var/tmp/spdk2.sock 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2856021 ']' 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.268 21:35:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.268 [2024-11-29 21:35:31.413983] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:59.268 [2024-11-29 21:35:31.414037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856021 ] 00:05:59.268 [2024-11-29 21:35:31.512456] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2856004 has claimed it. 00:05:59.268 [2024-11-29 21:35:31.512495] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.837 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2856021) - No such process 00:05:59.837 ERROR: process (pid: 2856021) is no longer running 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2856004 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2856004 00:05:59.837 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.773 lslocks: write error 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2856004 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2856004 ']' 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2856004 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856004 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856004' 00:06:00.773 killing process with pid 2856004 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2856004 00:06:00.773 21:35:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2856004 00:06:01.033 00:06:01.033 real 0m2.064s 00:06:01.033 user 0m2.180s 00:06:01.033 sys 0m0.761s 00:06:01.033 21:35:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
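The failure just logged is the whole point of locking_app_on_locked_coremask: a second lock-claiming target on an already-claimed core must refuse to start ("Cannot create lock on core 0, probably process ... has claimed it" followed by "Unable to acquire lock on assigned core mask - exiting"). A sketch of reproducing it by hand (SPDK build tree and sleep-as-waitforlisten assumed):

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    pid1=$!
    sleep 2
    if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: overlapping target started" >&2
    else
        echo "second target refused to start, as expected"
    fi
    kill "$pid1"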
00:06:01.033 21:35:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.033 ************************************ 00:06:01.033 END TEST locking_app_on_locked_coremask 00:06:01.033 ************************************ 00:06:01.033 21:35:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:01.033 21:35:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.033 21:35:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.033 21:35:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.033 ************************************ 00:06:01.033 START TEST locking_overlapped_coremask 00:06:01.033 ************************************ 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2856429 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2856429 /var/tmp/spdk.sock 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2856429 ']' 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.033 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.033 [2024-11-29 21:35:33.214488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:01.033 [2024-11-29 21:35:33.214538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856429 ] 00:06:01.292 [2024-11-29 21:35:33.285792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.292 [2024-11-29 21:35:33.327095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.292 [2024-11-29 21:35:33.327192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.292 [2024-11-29 21:35:33.327194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2856570 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2856570 /var/tmp/spdk2.sock 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2856570 /var/tmp/spdk2.sock 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2856570 /var/tmp/spdk2.sock 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2856570 ']' 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.292 21:35:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.552 [2024-11-29 21:35:33.575790] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
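The masks in this test make the failure that follows predictable: -m takes a hex bitmap of cores, so 0x7 covers cores 0-2 and 0x1c covers cores 2-4, and the two targets collide on core 2, exactly the core named in the error below. The overlap can be checked with shell arithmetic:

    a=0x7    # cores 0,1,2 (first target)
    b=0x1c   # cores 2,3,4 (second target)
    printf 'overlap: 0x%x\n' $(( a & b ))   # prints 0x4 -> bit 2 -> core 2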
00:06:01.552 [2024-11-29 21:35:33.575841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856570 ] 00:06:01.552 [2024-11-29 21:35:33.674230] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2856429 has claimed it. 00:06:01.552 [2024-11-29 21:35:33.674269] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2856570) - No such process 00:06:02.120 ERROR: process (pid: 2856570) is no longer running 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2856429 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2856429 ']' 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2856429 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856429 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856429' 00:06:02.120 killing process with pid 2856429 00:06:02.120 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2856429 00:06:02.120 21:35:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2856429 00:06:02.379 00:06:02.379 real 0m1.431s 00:06:02.379 user 0m3.902s 00:06:02.379 sys 0m0.436s 00:06:02.379 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.379 21:35:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.379 ************************************ 00:06:02.379 END TEST locking_overlapped_coremask 00:06:02.379 ************************************ 00:06:02.639 21:35:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:02.639 21:35:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.639 21:35:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.639 21:35:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.639 ************************************ 00:06:02.639 START TEST locking_overlapped_coremask_via_rpc 00:06:02.639 ************************************ 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2856711 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2856711 /var/tmp/spdk.sock 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2856711 ']' 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.639 21:35:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.639 [2024-11-29 21:35:34.736159] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:02.639 [2024-11-29 21:35:34.736208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856711 ] 00:06:02.639 [2024-11-29 21:35:34.806990] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
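The check_remaining_locks step used at the end of the overlapped-mask test above asserts that exactly the three lock files for cores 0-2 (from -m 0x7) exist, no more and no fewer. Its logic, lightly reformatted from the trace:

    check_remaining_locks() {
        local locks locks_expected
        locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what -m 0x7 implies
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }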
00:06:02.639 [2024-11-29 21:35:34.807015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.640 [2024-11-29 21:35:34.848445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.640 [2024-11-29 21:35:34.848539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.640 [2024-11-29 21:35:34.848542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2856875 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2856875 /var/tmp/spdk2.sock 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2856875 ']' 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.900 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.900 [2024-11-29 21:35:35.106472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:02.900 [2024-11-29 21:35:35.106531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856875 ] 00:06:03.159 [2024-11-29 21:35:35.207462] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.159 [2024-11-29 21:35:35.207491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.159 [2024-11-29 21:35:35.289523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.159 [2024-11-29 21:35:35.292713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.159 [2024-11-29 21:35:35.292714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.726 [2024-11-29 21:35:35.957739] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2856711 has claimed it. 
00:06:03.726 request: 00:06:03.726 { 00:06:03.726 "method": "framework_enable_cpumask_locks", 00:06:03.726 "req_id": 1 00:06:03.726 } 00:06:03.726 Got JSON-RPC error response 00:06:03.726 response: 00:06:03.726 { 00:06:03.726 "code": -32603, 00:06:03.726 "message": "Failed to claim CPU core: 2" 00:06:03.726 } 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2856711 /var/tmp/spdk.sock 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2856711 ']' 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.726 21:35:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2856875 /var/tmp/spdk2.sock 00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2856875 ']' 00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
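The exchange above is the point of this test: the second target was started with --disable-cpumask-locks so it could come up on mask 0x1c, which overlaps core 2 already claimed by pid 2856711, and re-enabling the locks over JSON-RPC is expected to fail while the first target still holds /var/tmp/spdk_cpu_lock_000..002. A minimal sketch of the same sequence, reusing the binaries and socket paths from this run and assuming the first target is still up:

  # Second target on an overlapping mask; --disable-cpumask-locks skips
  # claiming the /var/tmp/spdk_cpu_lock_* files at startup
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c \
      -r /var/tmp/spdk2.sock --disable-cpumask-locks &

  # Asking it to claim the locks now must fail with -32603
  # ("Failed to claim CPU core: 2"), exactly as logged above
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk2.sock framework_enable_cpumask_locks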
00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.985 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.244 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.244 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.244 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:04.244 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.244 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.244 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.244 00:06:04.244 real 0m1.709s 00:06:04.244 user 0m0.805s 00:06:04.244 sys 0m0.172s 00:06:04.244 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.244 21:35:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.244 ************************************ 00:06:04.244 END TEST locking_overlapped_coremask_via_rpc 00:06:04.244 ************************************ 00:06:04.244 21:35:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:04.244 21:35:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2856711 ]] 00:06:04.244 21:35:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2856711 00:06:04.244 21:35:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2856711 ']' 00:06:04.244 21:35:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2856711 00:06:04.244 21:35:36 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:04.244 21:35:36 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.244 21:35:36 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856711 00:06:04.502 21:35:36 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.502 21:35:36 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.502 21:35:36 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856711' 00:06:04.502 killing process with pid 2856711 00:06:04.502 21:35:36 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2856711 00:06:04.502 21:35:36 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2856711 00:06:04.760 21:35:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2856875 ]] 00:06:04.760 21:35:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2856875 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2856875 ']' 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2856875 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856875 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856875' 00:06:04.760 killing process with pid 2856875 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2856875 00:06:04.760 21:35:36 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2856875 00:06:05.018 21:35:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.018 21:35:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:05.018 21:35:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2856711 ]] 00:06:05.018 21:35:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2856711 00:06:05.018 21:35:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2856711 ']' 00:06:05.018 21:35:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2856711 00:06:05.018 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2856711) - No such process 00:06:05.018 21:35:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2856711 is not found' 00:06:05.018 Process with pid 2856711 is not found 00:06:05.018 21:35:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2856875 ]] 00:06:05.018 21:35:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2856875 00:06:05.018 21:35:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2856875 ']' 00:06:05.018 21:35:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2856875 00:06:05.018 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2856875) - No such process 00:06:05.018 21:35:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2856875 is not found' 00:06:05.018 Process with pid 2856875 is not found 00:06:05.018 21:35:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.018 00:06:05.018 real 0m15.729s 00:06:05.018 user 0m26.057s 00:06:05.018 sys 0m6.150s 00:06:05.018 21:35:37 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.018 21:35:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.018 ************************************ 00:06:05.018 END TEST cpu_locks 00:06:05.018 ************************************ 00:06:05.018 00:06:05.018 real 0m41.269s 00:06:05.018 user 1m17.176s 00:06:05.018 sys 0m10.339s 00:06:05.018 21:35:37 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.018 21:35:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.018 ************************************ 00:06:05.018 END TEST event 00:06:05.018 ************************************ 00:06:05.277 21:35:37 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:05.277 21:35:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.277 21:35:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.277 21:35:37 -- common/autotest_common.sh@10 -- # set +x 00:06:05.277 ************************************ 00:06:05.277 START TEST thread 00:06:05.277 ************************************ 00:06:05.277 21:35:37 thread -- common/autotest_common.sh@1125 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:05.277 * Looking for test storage... 00:06:05.277 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:05.277 21:35:37 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.277 21:35:37 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.277 21:35:37 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.277 21:35:37 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.277 21:35:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.277 21:35:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.277 21:35:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.277 21:35:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.277 21:35:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.277 21:35:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.277 21:35:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.277 21:35:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.277 21:35:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.277 21:35:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.277 21:35:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.277 21:35:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:05.277 21:35:37 thread -- scripts/common.sh@345 -- # : 1 00:06:05.277 21:35:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.277 21:35:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.277 21:35:37 thread -- scripts/common.sh@365 -- # decimal 1 00:06:05.536 21:35:37 thread -- scripts/common.sh@353 -- # local d=1 00:06:05.536 21:35:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.536 21:35:37 thread -- scripts/common.sh@355 -- # echo 1 00:06:05.536 21:35:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.536 21:35:37 thread -- scripts/common.sh@366 -- # decimal 2 00:06:05.536 21:35:37 thread -- scripts/common.sh@353 -- # local d=2 00:06:05.536 21:35:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.536 21:35:37 thread -- scripts/common.sh@355 -- # echo 2 00:06:05.536 21:35:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.536 21:35:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.536 21:35:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.536 21:35:37 thread -- scripts/common.sh@368 -- # return 0 00:06:05.536 21:35:37 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.536 21:35:37 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.536 --rc genhtml_branch_coverage=1 00:06:05.536 --rc genhtml_function_coverage=1 00:06:05.536 --rc genhtml_legend=1 00:06:05.536 --rc geninfo_all_blocks=1 00:06:05.536 --rc geninfo_unexecuted_blocks=1 00:06:05.536 00:06:05.536 ' 00:06:05.536 21:35:37 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.536 --rc genhtml_branch_coverage=1 00:06:05.536 --rc genhtml_function_coverage=1 00:06:05.536 --rc genhtml_legend=1 00:06:05.536 --rc geninfo_all_blocks=1 00:06:05.536 --rc geninfo_unexecuted_blocks=1 00:06:05.536 00:06:05.536 ' 00:06:05.536 21:35:37 thread -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.536 --rc genhtml_branch_coverage=1 00:06:05.536 --rc genhtml_function_coverage=1 00:06:05.536 --rc genhtml_legend=1 00:06:05.536 --rc geninfo_all_blocks=1 00:06:05.536 --rc geninfo_unexecuted_blocks=1 00:06:05.536 00:06:05.536 ' 00:06:05.536 21:35:37 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.536 --rc genhtml_branch_coverage=1 00:06:05.536 --rc genhtml_function_coverage=1 00:06:05.536 --rc genhtml_legend=1 00:06:05.536 --rc geninfo_all_blocks=1 00:06:05.536 --rc geninfo_unexecuted_blocks=1 00:06:05.536 00:06:05.536 ' 00:06:05.536 21:35:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.536 21:35:37 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:05.536 21:35:37 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.536 21:35:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 ************************************ 00:06:05.536 START TEST thread_poller_perf 00:06:05.536 ************************************ 00:06:05.536 21:35:37 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.536 [2024-11-29 21:35:37.599323] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:05.536 [2024-11-29 21:35:37.599415] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857273 ] 00:06:05.536 [2024-11-29 21:35:37.674169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.536 [2024-11-29 21:35:37.712421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.536 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:06.914 [2024-11-29T20:35:39.162Z] ====================================== 00:06:06.914 [2024-11-29T20:35:39.162Z] busy:2506190666 (cyc) 00:06:06.914 [2024-11-29T20:35:39.162Z] total_run_count: 429000 00:06:06.914 [2024-11-29T20:35:39.162Z] tsc_hz: 2500000000 (cyc) 00:06:06.914 [2024-11-29T20:35:39.162Z] ====================================== 00:06:06.914 [2024-11-29T20:35:39.162Z] poller_cost: 5841 (cyc), 2336 (nsec) 00:06:06.914 00:06:06.914 real 0m1.201s 00:06:06.914 user 0m1.108s 00:06:06.914 sys 0m0.089s 00:06:06.914 21:35:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.914 21:35:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.914 ************************************ 00:06:06.914 END TEST thread_poller_perf 00:06:06.914 ************************************ 00:06:06.914 21:35:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.914 21:35:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:06.914 21:35:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.914 21:35:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.914 ************************************ 00:06:06.914 START TEST thread_poller_perf 00:06:06.914 ************************************ 00:06:06.914 21:35:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.914 [2024-11-29 21:35:38.885881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:06.914 [2024-11-29 21:35:38.885962] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857549 ] 00:06:06.914 [2024-11-29 21:35:38.958125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.914 [2024-11-29 21:35:38.995504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.914 Running 1000 pollers for 1 seconds with 0 microseconds period. 
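For the run above (1000 pollers, 1 microsecond period, 1 second), poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds through the reported TSC rate. A quick recomputation of the logged figures (values copied from this run; awk merely stands in for the tool's internal arithmetic):

  awk 'BEGIN {
    busy = 2506190666; runs = 429000; tsc_hz = 2500000000  # numbers logged above
    cyc  = int(busy / runs)                                # 5841 cycles per poller call
    nsec = int(cyc / (tsc_hz / 1e9))                       # 2336 ns at 2.5 GHz
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
  }'

The same arithmetic on the 0-microsecond run below (2502002236 cycles over 5610000 runs) yields the reported 445 cyc / 178 nsec; the gap is consistent with timed pollers paying extra per call for period bookkeeping.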
00:06:07.849 [2024-11-29T20:35:40.097Z] ====================================== 00:06:07.849 [2024-11-29T20:35:40.097Z] busy:2502002236 (cyc) 00:06:07.849 [2024-11-29T20:35:40.097Z] total_run_count: 5610000 00:06:07.849 [2024-11-29T20:35:40.097Z] tsc_hz: 2500000000 (cyc) 00:06:07.849 [2024-11-29T20:35:40.097Z] ====================================== 00:06:07.849 [2024-11-29T20:35:40.097Z] poller_cost: 445 (cyc), 178 (nsec) 00:06:07.849 00:06:07.849 real 0m1.196s 00:06:07.849 user 0m1.097s 00:06:07.849 sys 0m0.095s 00:06:07.849 21:35:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.849 21:35:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.849 ************************************ 00:06:07.849 END TEST thread_poller_perf 00:06:07.849 ************************************ 00:06:08.108 21:35:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:08.108 00:06:08.108 real 0m2.762s 00:06:08.108 user 0m2.380s 00:06:08.108 sys 0m0.397s 00:06:08.108 21:35:40 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.108 21:35:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.108 ************************************ 00:06:08.108 END TEST thread 00:06:08.108 ************************************ 00:06:08.108 21:35:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:08.108 21:35:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:08.108 21:35:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.108 21:35:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.108 21:35:40 -- common/autotest_common.sh@10 -- # set +x 00:06:08.108 ************************************ 00:06:08.108 START TEST app_cmdline 00:06:08.108 ************************************ 00:06:08.108 21:35:40 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:06:08.108 * Looking for test storage... 
00:06:08.108 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:08.108 21:35:40 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.108 21:35:40 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.108 21:35:40 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.368 21:35:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.368 --rc genhtml_branch_coverage=1 00:06:08.368 --rc genhtml_function_coverage=1 00:06:08.368 --rc genhtml_legend=1 00:06:08.368 --rc geninfo_all_blocks=1 00:06:08.368 --rc geninfo_unexecuted_blocks=1 00:06:08.368 00:06:08.368 ' 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.368 --rc genhtml_branch_coverage=1 00:06:08.368 --rc genhtml_function_coverage=1 00:06:08.368 --rc genhtml_legend=1 00:06:08.368 --rc geninfo_all_blocks=1 00:06:08.368 --rc geninfo_unexecuted_blocks=1 
00:06:08.368 00:06:08.368 ' 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.368 --rc genhtml_branch_coverage=1 00:06:08.368 --rc genhtml_function_coverage=1 00:06:08.368 --rc genhtml_legend=1 00:06:08.368 --rc geninfo_all_blocks=1 00:06:08.368 --rc geninfo_unexecuted_blocks=1 00:06:08.368 00:06:08.368 ' 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.368 --rc genhtml_branch_coverage=1 00:06:08.368 --rc genhtml_function_coverage=1 00:06:08.368 --rc genhtml_legend=1 00:06:08.368 --rc geninfo_all_blocks=1 00:06:08.368 --rc geninfo_unexecuted_blocks=1 00:06:08.368 00:06:08.368 ' 00:06:08.368 21:35:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:08.368 21:35:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:08.368 21:35:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2857876 00:06:08.368 21:35:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2857876 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2857876 ']' 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.368 21:35:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.368 [2024-11-29 21:35:40.418899] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
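The cmdline test drives a target started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods may be called and anything else must be rejected. A sketch of the three checks that follow, against the default socket /var/tmp/spdk.sock (rpc.py path as used throughout this run):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  $rpc spdk_get_version                  # allowed: returns the version JSON seen below

  $rpc rpc_get_methods | jq -r '.[]' | sort
  # expected output, matching the (( 2 == 2 )) check below:
  # rpc_get_methods
  # spdk_get_version

  $rpc env_dpdk_get_mem_stats            # blocked: -32601 "Method not found"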
00:06:08.368 [2024-11-29 21:35:40.418952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857876 ] 00:06:08.368 [2024-11-29 21:35:40.487859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.368 [2024-11-29 21:35:40.527908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.627 21:35:40 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.627 21:35:40 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:08.627 21:35:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:08.886 { 00:06:08.886 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:08.886 "fields": { 00:06:08.886 "major": 24, 00:06:08.886 "minor": 9, 00:06:08.886 "patch": 1, 00:06:08.886 "suffix": "-pre", 00:06:08.886 "commit": "b18e1bd62" 00:06:08.886 } 00:06:08.886 } 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:08.886 21:35:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:08.886 21:35:40 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.145 request: 00:06:09.145 { 00:06:09.145 "method": "env_dpdk_get_mem_stats", 00:06:09.145 "req_id": 1 00:06:09.145 } 00:06:09.145 Got JSON-RPC error response 00:06:09.145 response: 00:06:09.145 { 00:06:09.145 "code": -32601, 00:06:09.145 "message": "Method not found" 00:06:09.145 } 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.145 21:35:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2857876 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2857876 ']' 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2857876 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2857876 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2857876' 00:06:09.145 killing process with pid 2857876 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@969 -- # kill 2857876 00:06:09.145 21:35:41 app_cmdline -- common/autotest_common.sh@974 -- # wait 2857876 00:06:09.405 00:06:09.405 real 0m1.339s 00:06:09.405 user 0m1.508s 00:06:09.405 sys 0m0.476s 00:06:09.405 21:35:41 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.405 21:35:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.405 ************************************ 00:06:09.405 END TEST app_cmdline 00:06:09.405 ************************************ 00:06:09.405 21:35:41 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:09.405 21:35:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.405 21:35:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.405 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:09.405 ************************************ 00:06:09.405 START TEST version 00:06:09.405 ************************************ 00:06:09.405 21:35:41 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:06:09.665 * Looking for test storage... 
00:06:09.665 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.665 21:35:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.665 21:35:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.665 21:35:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.665 21:35:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.665 21:35:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.665 21:35:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.665 21:35:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.665 21:35:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.665 21:35:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.665 21:35:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.665 21:35:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.665 21:35:41 version -- scripts/common.sh@344 -- # case "$op" in 00:06:09.665 21:35:41 version -- scripts/common.sh@345 -- # : 1 00:06:09.665 21:35:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.665 21:35:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.665 21:35:41 version -- scripts/common.sh@365 -- # decimal 1 00:06:09.665 21:35:41 version -- scripts/common.sh@353 -- # local d=1 00:06:09.665 21:35:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.665 21:35:41 version -- scripts/common.sh@355 -- # echo 1 00:06:09.665 21:35:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.665 21:35:41 version -- scripts/common.sh@366 -- # decimal 2 00:06:09.665 21:35:41 version -- scripts/common.sh@353 -- # local d=2 00:06:09.665 21:35:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.665 21:35:41 version -- scripts/common.sh@355 -- # echo 2 00:06:09.665 21:35:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.665 21:35:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.665 21:35:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.665 21:35:41 version -- scripts/common.sh@368 -- # return 0 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.665 --rc genhtml_branch_coverage=1 00:06:09.665 --rc genhtml_function_coverage=1 00:06:09.665 --rc genhtml_legend=1 00:06:09.665 --rc geninfo_all_blocks=1 00:06:09.665 --rc geninfo_unexecuted_blocks=1 00:06:09.665 00:06:09.665 ' 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.665 --rc genhtml_branch_coverage=1 00:06:09.665 --rc genhtml_function_coverage=1 00:06:09.665 --rc genhtml_legend=1 00:06:09.665 --rc geninfo_all_blocks=1 00:06:09.665 --rc geninfo_unexecuted_blocks=1 00:06:09.665 00:06:09.665 ' 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.665 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.665 --rc genhtml_branch_coverage=1 00:06:09.665 --rc genhtml_function_coverage=1 00:06:09.665 --rc genhtml_legend=1 00:06:09.665 --rc geninfo_all_blocks=1 00:06:09.665 --rc geninfo_unexecuted_blocks=1 00:06:09.665 00:06:09.665 ' 00:06:09.665 21:35:41 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.665 --rc genhtml_branch_coverage=1 00:06:09.665 --rc genhtml_function_coverage=1 00:06:09.665 --rc genhtml_legend=1 00:06:09.665 --rc geninfo_all_blocks=1 00:06:09.665 --rc geninfo_unexecuted_blocks=1 00:06:09.665 00:06:09.665 ' 00:06:09.665 21:35:41 version -- app/version.sh@17 -- # get_header_version major 00:06:09.665 21:35:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:09.666 21:35:41 version -- app/version.sh@14 -- # cut -f2 00:06:09.666 21:35:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.666 21:35:41 version -- app/version.sh@17 -- # major=24 00:06:09.666 21:35:41 version -- app/version.sh@18 -- # get_header_version minor 00:06:09.666 21:35:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:09.666 21:35:41 version -- app/version.sh@14 -- # cut -f2 00:06:09.666 21:35:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.666 21:35:41 version -- app/version.sh@18 -- # minor=9 00:06:09.666 21:35:41 version -- app/version.sh@19 -- # get_header_version patch 00:06:09.666 21:35:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:09.666 21:35:41 version -- app/version.sh@14 -- # cut -f2 00:06:09.666 21:35:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.666 21:35:41 version -- app/version.sh@19 -- # patch=1 00:06:09.666 21:35:41 version -- app/version.sh@20 -- # get_header_version suffix 00:06:09.666 21:35:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:06:09.666 21:35:41 version -- app/version.sh@14 -- # cut -f2 00:06:09.666 21:35:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.666 21:35:41 version -- app/version.sh@20 -- # suffix=-pre 00:06:09.666 21:35:41 version -- app/version.sh@22 -- # version=24.9 00:06:09.666 21:35:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:09.666 21:35:41 version -- app/version.sh@25 -- # version=24.9.1 00:06:09.666 21:35:41 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:09.666 21:35:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:06:09.666 21:35:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:09.666 21:35:41 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:09.666 21:35:41 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:09.666 00:06:09.666 real 0m0.273s 00:06:09.666 user 0m0.152s 00:06:09.666 sys 0m0.176s 00:06:09.666 21:35:41 version -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.666 21:35:41 version -- common/autotest_common.sh@10 -- # set +x 00:06:09.666 ************************************ 00:06:09.666 END TEST version 00:06:09.666 ************************************ 00:06:09.925 21:35:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:09.925 21:35:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:09.925 21:35:41 -- spdk/autotest.sh@194 -- # uname -s 00:06:09.925 21:35:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:09.925 21:35:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:09.925 21:35:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:09.925 21:35:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:09.925 21:35:41 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:09.925 21:35:41 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:09.925 21:35:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.925 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:09.925 21:35:41 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:09.925 21:35:41 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:09.925 21:35:41 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:09.925 21:35:41 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:09.925 21:35:41 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']' 00:06:09.925 21:35:41 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:09.925 21:35:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.925 21:35:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.925 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:06:09.925 ************************************ 00:06:09.925 START TEST nvmf_rdma 00:06:09.925 ************************************ 00:06:09.925 21:35:42 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:09.925 * Looking for test storage... 
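For reference, the version test that just passed assembles its expected string from include/spdk/version.h using the grep/cut pipeline visible above: major 24, minor 9, patch 1, suffix -pre give 24.9, then 24.9.1 since patch is non-zero, then 24.9.1rc0 for the -pre suffix, which must equal python's spdk.__version__. A sketch of the same extraction (tab-delimited cut, as in the logged commands; the python check assumes PYTHONPATH points at the spdk python dir, as the log sets it):

  hdr=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch           # -> 24.9.1
  python3 -c 'import spdk; print(spdk.__version__)'     # -> 24.9.1rc0, i.e. version + rc0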
00:06:09.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:09.925 21:35:42 nvmf_rdma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.925 21:35:42 nvmf_rdma -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.925 21:35:42 nvmf_rdma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.184 21:35:42 nvmf_rdma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:06:10.184 21:35:42 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.185 21:35:42 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.185 21:35:42 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.185 21:35:42 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:06:10.185 21:35:42 nvmf_rdma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.185 21:35:42 nvmf_rdma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.185 --rc genhtml_branch_coverage=1 00:06:10.185 --rc genhtml_function_coverage=1 00:06:10.185 --rc genhtml_legend=1 00:06:10.185 --rc geninfo_all_blocks=1 00:06:10.185 --rc geninfo_unexecuted_blocks=1 00:06:10.185 00:06:10.185 ' 00:06:10.185 21:35:42 nvmf_rdma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.185 --rc genhtml_branch_coverage=1 00:06:10.185 --rc genhtml_function_coverage=1 00:06:10.185 --rc genhtml_legend=1 00:06:10.185 --rc geninfo_all_blocks=1 00:06:10.185 --rc geninfo_unexecuted_blocks=1 00:06:10.185 00:06:10.185 ' 00:06:10.185 21:35:42 nvmf_rdma -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.185 --rc genhtml_branch_coverage=1 00:06:10.185 --rc genhtml_function_coverage=1 00:06:10.185 --rc genhtml_legend=1 00:06:10.185 --rc geninfo_all_blocks=1 00:06:10.185 --rc geninfo_unexecuted_blocks=1 00:06:10.185 00:06:10.185 ' 00:06:10.185 21:35:42 nvmf_rdma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.185 --rc genhtml_branch_coverage=1 00:06:10.185 --rc genhtml_function_coverage=1 00:06:10.185 --rc genhtml_legend=1 00:06:10.185 --rc geninfo_all_blocks=1 00:06:10.185 --rc geninfo_unexecuted_blocks=1 00:06:10.185 00:06:10.185 ' 00:06:10.185 21:35:42 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:10.185 21:35:42 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:10.185 21:35:42 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:10.185 21:35:42 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:10.185 21:35:42 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.185 21:35:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:10.185 ************************************ 00:06:10.185 START TEST nvmf_target_core 00:06:10.185 ************************************ 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:06:10.185 * Looking for test storage... 00:06:10.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.185 --rc genhtml_branch_coverage=1 00:06:10.185 --rc genhtml_function_coverage=1 00:06:10.185 --rc genhtml_legend=1 00:06:10.185 --rc geninfo_all_blocks=1 00:06:10.185 --rc geninfo_unexecuted_blocks=1 00:06:10.185 00:06:10.185 ' 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.185 --rc genhtml_branch_coverage=1 00:06:10.185 --rc genhtml_function_coverage=1 00:06:10.185 --rc genhtml_legend=1 00:06:10.185 --rc geninfo_all_blocks=1 00:06:10.185 --rc geninfo_unexecuted_blocks=1 00:06:10.185 00:06:10.185 ' 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.185 --rc genhtml_branch_coverage=1 00:06:10.185 --rc genhtml_function_coverage=1 00:06:10.185 --rc genhtml_legend=1 00:06:10.185 --rc geninfo_all_blocks=1 00:06:10.185 --rc geninfo_unexecuted_blocks=1 00:06:10.185 00:06:10.185 ' 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.185 --rc genhtml_branch_coverage=1 00:06:10.185 --rc genhtml_function_coverage=1 00:06:10.185 --rc genhtml_legend=1 00:06:10.185 --rc geninfo_all_blocks=1 00:06:10.185 --rc geninfo_unexecuted_blocks=1 00:06:10.185 00:06:10.185 ' 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:10.185 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.444 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.445 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.445 
************************************ 00:06:10.445 START TEST nvmf_abort 00:06:10.445 ************************************ 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:10.445 * Looking for test storage... 00:06:10.445 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.445 --rc genhtml_branch_coverage=1 00:06:10.445 --rc genhtml_function_coverage=1 00:06:10.445 --rc genhtml_legend=1 00:06:10.445 --rc geninfo_all_blocks=1 00:06:10.445 --rc geninfo_unexecuted_blocks=1 00:06:10.445 00:06:10.445 ' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.445 --rc genhtml_branch_coverage=1 00:06:10.445 --rc genhtml_function_coverage=1 00:06:10.445 --rc genhtml_legend=1 00:06:10.445 --rc geninfo_all_blocks=1 00:06:10.445 --rc geninfo_unexecuted_blocks=1 00:06:10.445 00:06:10.445 ' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.445 --rc genhtml_branch_coverage=1 00:06:10.445 --rc genhtml_function_coverage=1 00:06:10.445 --rc genhtml_legend=1 00:06:10.445 --rc geninfo_all_blocks=1 00:06:10.445 --rc geninfo_unexecuted_blocks=1 00:06:10.445 00:06:10.445 ' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.445 --rc genhtml_branch_coverage=1 00:06:10.445 --rc genhtml_function_coverage=1 00:06:10.445 --rc genhtml_legend=1 00:06:10.445 --rc geninfo_all_blocks=1 00:06:10.445 --rc geninfo_unexecuted_blocks=1 00:06:10.445 00:06:10.445 ' 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.445 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.705 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:06:10.705 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.706 21:35:42 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.275 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:17.276 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:17.276 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 
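Both Mellanox ports above report device id 0x1015, so neither of the 0x1017/0x1019 branches matches, and because the transport is rdma the connect command is lengthened to 'nvme connect -i 15'. The trace next resolves each PCI function to its kernel netdev through a sysfs glob (the pci_net_devs expansion at nvmf/common.sh@407); the same lookup as a standalone sketch, using the first PCI address reported above:

# Resolve a PCI function to its network interface via sysfs, as the
# pci_net_devs glob in the trace does.
pci=0000:d9:00.0
for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
    echo "${netdev##*/}"              # prints the interface name, e.g. mlx_0_0
done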
00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:17.276 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:17.276 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # rdma_device_init 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@526 -- # allocate_nic_ips 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:17.276 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:17.276 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:17.276 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:17.276 altname enp217s0f0np0 
00:06:17.276 altname ens818f0np0 00:06:17.276 inet 192.168.100.8/24 scope global mlx_0_0 00:06:17.276 valid_lft forever preferred_lft forever 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:17.277 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:17.277 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:17.277 altname enp217s0f1np1 00:06:17.277 altname ens818f1np1 00:06:17.277 inet 192.168.100.9/24 scope global mlx_0_1 00:06:17.277 valid_lft forever preferred_lft forever 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:06:17.277 192.168.100.9' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # head -n 1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:06:17.277 192.168.100.9' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:06:17.277 192.168.100.9' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # tail -n +2 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # head -n 1 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 
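allocate_nic_ips and get_available_rdma_ips above both reduce the output of `ip -o -4 addr show <if>` to a bare IPv4 address with awk and cut, yielding 192.168.100.8 and 192.168.100.9 for the two RDMA targets. The same extraction as a one-liner (interface name taken from the trace; it assumes a host where mlx_0_0 exists):

# Pull the IPv4 address off an interface exactly as the trace does:
# field 4 of `ip -o -4` is "addr/prefix"; cut drops the prefix length.
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8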
00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=2861746 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 2861746 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2861746 ']' 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.277 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:17.277 [2024-11-29 21:35:49.453503] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:17.277 [2024-11-29 21:35:49.453561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.537 [2024-11-29 21:35:49.524977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.537 [2024-11-29 21:35:49.565626] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.537 [2024-11-29 21:35:49.565681] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.537 [2024-11-29 21:35:49.565691] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.537 [2024-11-29 21:35:49.565699] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.537 [2024-11-29 21:35:49.565722] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
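nvmf_tgt is launched above with core mask 0xE (binary 1110), so DPDK pins one reactor on each of cores 1, 2 and 3 — the three "Reactor started" notices that follow. A small sketch of decoding such a mask:

# Decode an SPDK/DPDK core mask: every set bit selects one core.
mask=0xE
for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "core $core"
done
# prints core 1, core 2, core 3 - matching the reactors reported below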
00:06:17.537 [2024-11-29 21:35:49.565823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.537 [2024-11-29 21:35:49.565918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.537 [2024-11-29 21:35:49.565920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.537 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.537 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:17.537 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:17.538 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.538 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.538 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.538 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:06:17.538 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.538 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.538 [2024-11-29 21:35:49.755129] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15c9710/0x15cdbc0) succeed. 00:06:17.538 [2024-11-29 21:35:49.773721] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15cac60/0x160f260) succeed. 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.798 Malloc0 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.798 Delay0 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.798 [2024-11-29 21:35:49.932767] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.798 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.799 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.799 21:35:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:17.799 [2024-11-29 21:35:50.038959] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:20.331 Initializing NVMe Controllers 00:06:20.331 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:06:20.331 controller IO queue size 128 less than required 00:06:20.331 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:20.331 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:20.331 Initialization complete. Launching workers. 
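Before the abort statistics below: the target side was assembled entirely over /var/tmp/spdk.sock by the rpc_cmd calls traced above. Collected in order, the same configuration could plausibly be replayed by hand — a sketch assuming rpc_cmd forwards its arguments to scripts/rpc.py, as the rpc_py assignment later in this log indicates; addresses, sizes and NQNs are exactly those in the trace:

# Replay of the target configuration traced above, via the raw RPC script.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420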
00:06:20.331 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42868 00:06:20.331 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42929, failed to submit 62 00:06:20.331 success 42869, unsuccessful 60, failed 0 00:06:20.331 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:20.331 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:20.332 rmmod nvme_rdma 00:06:20.332 rmmod nvme_fabrics 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 2861746 ']' 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 2861746 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2861746 ']' 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2861746 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2861746 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2861746' 00:06:20.332 killing process with pid 2861746 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2861746 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2861746 00:06:20.332 21:35:52 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:06:20.332 00:06:20.332 real 0m10.033s 00:06:20.332 user 0m12.878s 00:06:20.332 sys 0m5.511s 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.332 ************************************ 00:06:20.332 END TEST nvmf_abort 00:06:20.332 ************************************ 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.332 21:35:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.591 ************************************ 00:06:20.591 START TEST nvmf_ns_hotplug_stress 00:06:20.592 ************************************ 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:06:20.592 * Looking for test storage... 00:06:20.592 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.592 --rc genhtml_branch_coverage=1 00:06:20.592 --rc genhtml_function_coverage=1 00:06:20.592 --rc genhtml_legend=1 00:06:20.592 --rc geninfo_all_blocks=1 00:06:20.592 --rc geninfo_unexecuted_blocks=1 00:06:20.592 00:06:20.592 ' 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.592 --rc genhtml_branch_coverage=1 00:06:20.592 --rc genhtml_function_coverage=1 00:06:20.592 --rc genhtml_legend=1 00:06:20.592 --rc geninfo_all_blocks=1 00:06:20.592 --rc geninfo_unexecuted_blocks=1 00:06:20.592 00:06:20.592 ' 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.592 --rc genhtml_branch_coverage=1 00:06:20.592 --rc genhtml_function_coverage=1 00:06:20.592 --rc genhtml_legend=1 00:06:20.592 --rc geninfo_all_blocks=1 00:06:20.592 --rc geninfo_unexecuted_blocks=1 00:06:20.592 00:06:20.592 ' 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:20.592 --rc genhtml_branch_coverage=1 00:06:20.592 --rc genhtml_function_coverage=1 00:06:20.592 --rc genhtml_legend=1 00:06:20.592 --rc geninfo_all_blocks=1 00:06:20.592 --rc geninfo_unexecuted_blocks=1 00:06:20.592 00:06:20.592 ' 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.592 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.852 21:35:52 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.852 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.852 21:35:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:27.420 21:35:59 
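[Editor's note] The "integer expression expected" failure at common.sh line 33 above is the classic empty-operand integer test: in bash, '[' '' -eq 1 ']' is an error, not a false result. A hedged sketch of the defensive form (FLAG is a hypothetical name standing in for whatever variable common.sh tests there):

    # ${FLAG:-0} substitutes 0 when FLAG is unset or empty, so the
    # integer comparison always sees a number and never errors out.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "enabled"
    fi
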
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:27.420 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:06:27.420 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:06:27.421 21:35:59 
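[Editor's note] The e810/x722/mlx tables built above are keyed by vendor:device pairs (intel=0x8086, mellanox=0x15b3) looked up in the harness's pci_bus_cache. A rough standalone equivalent of that discovery step using plain lspci, shown only as a sketch of what the cache lookup resolves to:

    # List PCI functions with Mellanox vendor ID 0x15b3, printing the
    # address and vendor:device pair, e.g. "0000:d9:00.0 15b3:1015".
    lspci -Dn | awk '$3 ~ /^15b3:/ {print $1, $3}'
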
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:27.421 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:27.421 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 
0000:d9:00.1: mlx_0_1' 00:06:27.421 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # rdma_device_init 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:27.421 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@526 -- # allocate_nic_ips 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@108 -- # echo mlx_0_0 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:27.680 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:27.680 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:27.680 altname enp217s0f0np0 00:06:27.680 altname ens818f0np0 00:06:27.680 inet 192.168.100.8/24 scope global mlx_0_0 00:06:27.680 valid_lft forever preferred_lft forever 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:27.680 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:27.680 7: mlx_0_1: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:06:27.681 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:27.681 altname enp217s0f1np1 00:06:27.681 altname ens818f1np1 00:06:27.681 inet 192.168.100.9/24 scope global mlx_0_1 00:06:27.681 valid_lft forever preferred_lft forever 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 
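[Editor's note] get_ip_address, traced at common.sh@116-117 above, is just the ip/awk/cut pipeline; reconstructed here as a standalone helper for reference:

    # Print the first IPv4 address of a netdev, without the /prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # 192.168.100.9
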
00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:06:27.681 192.168.100.9' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:06:27.681 192.168.100.9' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # head -n 1 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:06:27.681 192.168.100.9' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # tail -n +2 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # head -n 1 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=2865732 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0xE 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 2865732 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2865732 ']' 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.681 21:35:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.940 [2024-11-29 21:35:59.934480] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:27.940 [2024-11-29 21:35:59.934539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.940 [2024-11-29 21:36:00.005943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.940 [2024-11-29 21:36:00.050524] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.940 [2024-11-29 21:36:00.050564] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.940 [2024-11-29 21:36:00.050574] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.940 [2024-11-29 21:36:00.050582] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.940 [2024-11-29 21:36:00.050590] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
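[Editor's note] nvmfappstart -m 0xE above resolves to launching nvmf_tgt with the traced arguments and waiting for its RPC socket. The real waitforlisten in autotest_common.sh does more bookkeeping; a minimal stand-in could poll the socket like this:

    # Start the target on cores 1-3 (-m 0xE) with all trace groups (-e 0xFFFF).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll until the RPC server answers on /var/tmp/spdk.sock.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    while ! "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
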
00:06:27.940 [2024-11-29 21:36:00.050696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.940 [2024-11-29 21:36:00.050780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.940 [2024-11-29 21:36:00.050782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.940 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.940 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:27.940 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:27.940 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.941 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:27.941 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.941 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:27.941 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:28.200 [2024-11-29 21:36:00.402010] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1778710/0x177cbc0) succeed. 00:06:28.200 [2024-11-29 21:36:00.414170] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1779c60/0x17be260) succeed. 00:06:28.459 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:28.718 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:28.718 [2024-11-29 21:36:00.918241] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:28.718 21:36:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:28.977 21:36:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:29.236 Malloc0 00:06:29.236 21:36:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:29.495 Delay0 00:06:29.495 21:36:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.753 21:36:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:06:29.753 NULL1 00:06:29.753 21:36:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:30.012 21:36:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:30.012 21:36:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2866162 00:06:30.012 21:36:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:30.012 21:36:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.387 Read completed with error (sct=0, sc=11) 00:06:31.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.387 21:36:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.387 21:36:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:31.387 21:36:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:31.645 true 00:06:31.645 21:36:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:31.645 21:36:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.581 21:36:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.581 21:36:04 
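[Editor's note] Stripped of xtrace noise, target/ns_hotplug_stress.sh@27-50 above reduces to the bring-up and loop below, reconstructed from the trace (rpc_py and PERF_PID as set at @11 and @42). The "Read completed with error (sct=0, sc=11)" floods that fill the rest of this log are the expected completions for reads racing a namespace hot-remove: generic command status 0x0b, Invalid Namespace or Format.

    # Bring-up, as traced at @27-@36.
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Background reader, as traced at @40/@42.
    spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    # Hotplug stress, as traced at @44-@50: while the perf reader is
    # alive, yank namespace 1, re-add Delay0, and grow NULL1 by a block.
    null_size=1000
    while kill -0 "$PERF_PID"; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done
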
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:32.581 21:36:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:32.840 true 00:06:32.840 21:36:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:32.840 21:36:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.775 21:36:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.775 21:36:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:33.775 21:36:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:34.035 true 00:06:34.035 21:36:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:34.035 21:36:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 21:36:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 21:36:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:34.972 21:36:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:35.231 true 00:06:35.231 21:36:07 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:35.231 21:36:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.170 21:36:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.170 21:36:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:36.170 21:36:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:36.429 true 00:06:36.429 21:36:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:36.429 21:36:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.365 21:36:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.365 21:36:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:37.365 21:36:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:37.624 true 00:06:37.624 21:36:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:37.624 21:36:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.912 21:36:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.912 21:36:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:37.912 21:36:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:38.171 true 00:06:38.171 21:36:10 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:38.171 21:36:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.366 21:36:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.366 21:36:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:39.366 21:36:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:39.682 true 00:06:39.682 21:36:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:39.682 21:36:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.572 21:36:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.572 21:36:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:40.572 21:36:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:40.830 true 00:06:40.830 21:36:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:40.830 21:36:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.766 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.766 21:36:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.766 21:36:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:41.766 21:36:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:42.024 true 00:06:42.025 21:36:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:42.025 21:36:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.960 21:36:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.960 21:36:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:42.960 21:36:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:43.219 true 00:06:43.219 21:36:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:43.219 21:36:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 21:36:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.155 21:36:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:44.155 21:36:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:44.413 true 00:06:44.413 21:36:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:44.413 21:36:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.350 21:36:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.610 21:36:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:45.610 21:36:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:45.610 true 00:06:45.610 21:36:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:45.610 21:36:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.869 21:36:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.128 21:36:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:46.128 21:36:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:46.387 true 00:06:46.387 21:36:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:46.387 21:36:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.323 21:36:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.583 21:36:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:47.583 21:36:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:47.842 true 00:06:47.842 21:36:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:47.842 21:36:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.668 21:36:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.668 21:36:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:48.668 21:36:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:48.927 true 00:06:48.927 21:36:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:48.927 21:36:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.864 21:36:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.864 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:49.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.864 21:36:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:49.864 21:36:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:50.123 true 00:06:50.123 21:36:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:50.123 21:36:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.059 21:36:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.059 21:36:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:51.059 21:36:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:51.318 true 00:06:51.318 21:36:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:51.318 21:36:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.254 21:36:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.254 21:36:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:52.254 21:36:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:52.512 true 00:06:52.513 21:36:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:52.513 21:36:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.447 21:36:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.706 21:36:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:53.706 21:36:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:53.706 true 00:06:53.706 21:36:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:53.706 21:36:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.965 21:36:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.224 21:36:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:54.224 21:36:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:54.224 true 00:06:54.483 21:36:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:54.483 21:36:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.420 21:36:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.679 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.679 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.679 
21:36:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:55.679 21:36:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:55.679 true 00:06:55.938 21:36:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:55.938 21:36:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.505 21:36:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.763 21:36:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:56.763 21:36:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:57.022 true 00:06:57.022 21:36:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:57.022 21:36:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.958 21:36:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.958 21:36:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:57.958 21:36:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:58.217 true 00:06:58.217 21:36:30 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:58.217 21:36:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.153 21:36:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.153 21:36:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:59.153 21:36:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:59.412 true 00:06:59.412 21:36:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:06:59.412 21:36:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.349 21:36:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.349 21:36:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:00.349 21:36:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:00.608 true 00:07:00.608 21:36:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:07:00.608 21:36:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.867 21:36:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.125 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:01.125 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:01.125 true 00:07:01.125 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 
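The trace above is the hot-plug/resize loop of target/ns_hotplug_stress.sh repeating: while the I/O generator (PID 2866162 here) stays alive (@44), NSID 1 is hot-removed (@45), the Delay0 bdev is re-attached (@46), and the NULL1 null bdev is grown by one block and resized (@49-@50), with the RPC printing "true" on success. A minimal sketch of that loop, reconstructed from the traced commands only; the variable names ($rpc, $nqn, $perf_pid, $null_size) are assumptions, not the script's verbatim text:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    while kill -0 "$perf_pid"; do                 # @44: loop while the I/O generator lives
        $rpc nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0  # @46: hot-add the Delay0 bdev back
        ((++null_size))                           # @49: 1015, 1016, ... in the trace
        $rpc bdev_null_resize NULL1 "$null_size"  # @50: grow NULL1; RPC prints "true"
    done

The suppressed "Read completed with error (sct=0, sc=11)" messages above are the expected side effect: reads in flight against the namespace fail while it is detached.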
00:07:01.125 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.384 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.643 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:01.643 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:01.903 true 00:07:01.903 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162 00:07:01.903 21:36:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.903 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.162 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:02.162 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:02.162 Initializing NVMe Controllers 00:07:02.162 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.162 Controller IO queue size 128, less than required. 00:07:02.162 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:02.162 Controller IO queue size 128, less than required. 00:07:02.162 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:02.162 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:02.162 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:02.162 Initialization complete. Launching workers. 
00:07:02.162 ========================================================
00:07:02.162                                                           Latency(us)
00:07:02.162 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:07:02.162 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5299.97       2.59   21830.19     884.92 1007060.85
00:07:02.162 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   36079.93      17.62    3547.45    2091.15  287234.64
00:07:02.162 ========================================================
00:07:02.162 Total                                                                          :   41379.90      20.21    5889.12     884.92 1007060.85
00:07:02.162 
00:07:02.421 true
00:07:02.421 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2866162
00:07:02.421 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2866162) - No such process
00:07:02.421 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2866162
00:07:02.421 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:02.680 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:02.680 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:02.680 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:02.680 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:02.680 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:02.680 21:36:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:02.938 null0
00:07:02.938 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:02.938 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:02.938 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:03.197 null1
00:07:03.197 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:03.197 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:03.197 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:03.197 null2
00:07:03.454 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:03.454 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:03.454 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:03.454 null3 00:07:03.454 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.454 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.454 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:03.713 null4 00:07:03.713 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.713 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.713 21:36:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:03.972 null5 00:07:03.972 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.972 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.972 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:03.972 null6 00:07:03.972 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:03.972 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:03.972 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:04.232 null7 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
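The latency summary printed above is internally consistent, which is worth a quick check (editorial arithmetic, not log output): the Total row's IOPS is the sum over the two namespaces, and its Average column is the IOPS-weighted mean of the per-namespace averages. NSID 1, presumably backed by the delay-injecting Delay0 bdev, averages roughly 21.8 ms per I/O against roughly 3.5 ms on NSID 2.

    % Total IOPS: 5299.97 + 36079.93 = 41379.90
    % IOPS-weighted mean latency, matching the Total row's 5889.12 us:
    \bar{L} = \frac{5299.97 \times 21830.19 + 36079.93 \times 3547.45}{41379.90} \approx 5889\ \mu\text{s}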
00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
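From @58 onward the script is in its concurrent phase: it creates eight 100 MiB null bdevs with 4096-byte blocks (null0 through null7, traced above), then launches eight add_remove workers in the background (@63), one per namespace/bdev pair, saving each worker's PID (@64) for the wait at @66. A sketch of that phase under the same assumptions as the earlier one; the add_remove helper itself is sketched after its trace below:

    nthreads=8                                   # @58
    pids=()                                      # @58

    for ((i = 0; i < nthreads; i++)); do         # @59
        $rpc bdev_null_create "null$i" 100 4096  # @60: 100 MiB null bdev, 4 KiB blocks
    done

    for ((i = 0; i < nthreads; i++)); do         # @62
        add_remove "$((i + 1))" "null$i" &       # @63: one background worker per NSID/bdev
        pids+=($!)                               # @64: collect the worker PID
    done

    wait "${pids[@]}"                            # @66: join all eight workers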
00:07:04.232 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.233 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2872676 2872677 2872679 2872681 2872683 2872685 2872686 2872690 00:07:04.493 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.493 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:04.493 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:04.493 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.493 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:04.493 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:04.493 21:36:36 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:04.493 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.753 
21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:04.753 21:36:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.012 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.013 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.272 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.272 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.272 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.272 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.272 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.272 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.272 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.272 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.532 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.791 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.791 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.791 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.791 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.791 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.791 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.791 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.791 21:36:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.791 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.051 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.311 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
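The interleaved @14-@18 entries above and below are those eight workers racing one another: each repeatedly attaches its own bdev as a fixed NSID on the shared subsystem and detaches it again, ten times per the (( i < 10 )) guard, which is what exercises the concurrent namespace hot-plug paths. A sketch reconstructed from the traced commands, with $rpc and $nqn the same assumed shorthands as before:

    add_remove() {
        local nsid=$1 bdev=$2                                     # @14
        for ((i = 0; i < 10; i++)); do                            # @16
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17: attach as fixed NSID
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18: detach it again
        done
    }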
00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.571 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.830 21:36:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.830 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.831 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.831 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.831 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.831 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.831 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.090 21:36:39 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.090 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.350 21:36:39 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.350 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.609 21:36:39 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.609 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.869 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.869 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.869 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.869 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.869 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 21:36:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.869 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.132 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:08.521 rmmod nvme_rdma 00:07:08.521 rmmod nvme_fabrics 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 2865732 ']' 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 2865732 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2865732 ']' 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2865732 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2865732 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2865732' 00:07:08.521 killing process with pid 2865732 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2865732 00:07:08.521 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2865732 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:07:08.813 00:07:08.813 real 0m48.236s 00:07:08.813 user 3m19.131s 00:07:08.813 sys 0m14.265s 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set 
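
The trace above is the teardown path: the EXIT trap is cleared, nvmftestfini unloads the kernel initiator modules (the bare "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines are modprobe -v's own output), and killprocess reaps the target, pid 2865732. A condensed sketch of that sequence as the trace shows it — the break/sleep placement inside the {1..20} retry loop is an assumption, the rest follows the logged commands:

nvmfcleanup() {
    sync
    set +e                                  # unload may fail while queues drain
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break   # prints the rmmod lines seen above
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
}
killprocess() {
    local pid=$1
    kill -0 "$pid"                          # confirm the target is still alive
    ps --no-headers -o comm= "$pid"         # reactor_1, i.e. the SPDK app thread
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                             # reap it so ports and hugepages free up
}
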
+x 00:07:08.813 ************************************ 00:07:08.813 END TEST nvmf_ns_hotplug_stress 00:07:08.813 ************************************ 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.813 ************************************ 00:07:08.813 START TEST nvmf_delete_subsystem 00:07:08.813 ************************************ 00:07:08.813 21:36:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:08.813 * Looking for test storage... 00:07:08.813 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:08.813 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.813 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.813 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.073 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:09.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.074 --rc genhtml_branch_coverage=1 00:07:09.074 --rc genhtml_function_coverage=1 00:07:09.074 --rc genhtml_legend=1 00:07:09.074 --rc geninfo_all_blocks=1 00:07:09.074 --rc geninfo_unexecuted_blocks=1 00:07:09.074 00:07:09.074 ' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:09.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.074 --rc genhtml_branch_coverage=1 00:07:09.074 --rc genhtml_function_coverage=1 00:07:09.074 --rc genhtml_legend=1 00:07:09.074 --rc geninfo_all_blocks=1 00:07:09.074 --rc geninfo_unexecuted_blocks=1 00:07:09.074 00:07:09.074 ' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:09.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.074 --rc genhtml_branch_coverage=1 00:07:09.074 --rc genhtml_function_coverage=1 00:07:09.074 --rc genhtml_legend=1 00:07:09.074 --rc geninfo_all_blocks=1 00:07:09.074 --rc geninfo_unexecuted_blocks=1 00:07:09.074 00:07:09.074 ' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:09.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.074 --rc genhtml_branch_coverage=1 00:07:09.074 --rc genhtml_function_coverage=1 00:07:09.074 --rc genhtml_legend=1 00:07:09.074 --rc geninfo_all_blocks=1 00:07:09.074 --rc geninfo_unexecuted_blocks=1 00:07:09.074 00:07:09.074 ' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
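
The lcov probe traced above feeds a generic dotted-version comparator (the lt / cmp_versions helpers from scripts/common.sh); the IFS=.-: split, the per-field decimal checks, and the ternary length bound are all visible in the trace. A compact re-derivation of that logic — helper names follow the trace, but this is a sketch, not a verbatim copy of the upstream file:

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-:                 # split version strings on '.', '-' and ':'
    local -a ver1 ver2
    local op=$2 v a b max
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}              # absent fields compare as 0
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]                                # every field matched
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.0 lcov: keep the legacy --rc flags'

So here, lt 1.15 2 returns true on the very first field (1 < 2), which is what selects the lcov_branch_coverage/lcov_function_coverage option set exported above.
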
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
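
The PATH values echoed above carry the go/protoc/golangci triplet once per re-sourcing of paths/export.sh, since each export blindly prepends. An idempotent prepend avoids that growth; pathmunge here is a hypothetical helper for illustration, not something the repo provides:

pathmunge() {
    case ":$PATH:" in
        *":$1:"*) ;;             # already on PATH: leave it untouched
        *) PATH=$1:$PATH ;;
    esac
}
pathmunge /opt/golangci/1.54.2/bin
pathmunge /opt/protoc/21.7/bin
pathmunge /opt/go/1.21.1/bin
export PATH
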
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.074 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.074 21:36:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:15.645 21:36:47 
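
Note the failure logged above: nvmf/common.sh line 33 ends up running '[' '' -eq 1 ']', i.e. an unset variable reaches a numeric test, and test prints "integer expression expected" (the run survives because the non-zero status simply skips that branch). Defaulting the expansion keeps the test well-formed; the variable name below is hypothetical, chosen only to show the shape of the fix:

# broken shape, as logged:   [ "$SOME_FLAG" -eq 1 ]    # SOME_FLAG is empty
# guarded shape:
if [[ ${SOME_FLAG:-0} -eq 1 ]]; then
    echo "flag set"
fi
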
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:15.645 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:15.645 21:36:47 
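
The discovery traced above buckets NICs by PCI vendor:device pair and then, because this is an mlx5 RDMA run, narrows the candidate list to the Mellanox bucket — which is how both 0x15b3:0x1015 ports get picked up. In sketch form: pci_bus_cache is assumed to map "vendor:device" keys to PCI addresses and is populated outside this excerpt, and the SPDK_TEST_NVMF_NICS variable name is a guess (the trace only shows the already-expanded [[ mlx5 == mlx5 ]]):

intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache    # filled elsewhere, e.g. ["0x15b3:0x1015"]="0000:d9:00.0 0000:d9:00.1"
declare -a e810 x722 mlx pci_devs
e810+=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1015"]})      # ConnectX-4 Lx: the two hits above
mlx+=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x101d"]})
pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
[[ ${SPDK_TEST_NVMF_NICS:-} == mlx5 ]] && pci_devs=("${mlx[@]}")   # mlx5 run: keep only Mellanox ports
for pci in "${pci_devs[@]}"; do
    echo "Found $pci"
done
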
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:15.645 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:15.645 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.645 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:15.646 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
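
Each selected PCI function is then mapped to its kernel interface through sysfs, exactly as the count check and the ##*/ strip in the trace suggest. A self-contained version of that step (nullglob is added here so the empty case is actually detectable; the sample pci_devs values are the two ports found above):

shopt -s nullglob                        # unmatched glob -> empty array, not a literal '*'
pci_devs=(0000:d9:00.0 0000:d9:00.1)     # from the discovery above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    (( ${#pci_net_devs[@]} == 0 )) && continue     # port has no bound netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")        # keep the interface name, e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
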
-- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # rdma_device_init 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:15.646 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@526 -- # allocate_nic_ips 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.907 21:36:47 
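
rdma_device_init in the trace is mostly module plumbing: the IB/iWARP core stack is loaded before rxe_cfg and IP assignment run. The same sequence collapsed into a loop — the module list is read straight off the modprobe lines above, only the loop form is a condensation:

if [[ $(uname) == Linux ]]; then
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"    # verbs + connection-manager stack used by nvme-rdma
    done
fi
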
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:15.907 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.907 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:15.907 altname enp217s0f0np0 00:07:15.907 altname ens818f0np0 00:07:15.907 inet 192.168.100.8/24 scope global mlx_0_0 00:07:15.907 valid_lft forever preferred_lft forever 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:15.907 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.907 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:15.907 altname enp217s0f1np1 00:07:15.907 altname ens818f1np1 00:07:15.907 inet 192.168.100.9/24 scope global mlx_0_1 00:07:15.907 valid_lft forever preferred_lft forever 00:07:15.907 21:36:47 
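
The per-interface address probe traced above is small enough to restate whole: ip -o prints one line per address, field 4 is the CIDR, and cut drops the prefix length. This matches the helper as traced; only the empty-result message is an addition for illustration:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
ip_addr=$(get_ip_address mlx_0_0)     # -> 192.168.100.8 on this rig
[[ -z $ip_addr ]] && echo "mlx_0_0 has no IPv4 address yet" >&2
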
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:15.907 21:36:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk 
'{print $4}' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:07:15.907 192.168.100.9' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # head -n 1 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:07:15.907 192.168.100.9' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:07:15.907 192.168.100.9' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # tail -n +2 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # head -n 1 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=2876940 00:07:15.907 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 2876940 00:07:15.908 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2876940 ']' 00:07:15.908 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.908 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.908 21:36:48 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.908 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.908 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.908 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:15.908 [2024-11-29 21:36:48.137337] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:15.908 [2024-11-29 21:36:48.137392] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.167 [2024-11-29 21:36:48.207690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.167 [2024-11-29 21:36:48.246195] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.167 [2024-11-29 21:36:48.246238] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.168 [2024-11-29 21:36:48.246247] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.168 [2024-11-29 21:36:48.246258] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.168 [2024-11-29 21:36:48.246281] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.168 [2024-11-29 21:36:48.246371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.168 [2024-11-29 21:36:48.246373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.168 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.168 [2024-11-29 21:36:48.404995] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6ee910/0x6f2dc0) succeed. 
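rpc_cmd, used throughout this trace, forwards RPCs to the target over the UNIX socket that waitforlisten confirmed above (/var/tmp/spdk.sock). The transport-creation step just traced could equally be issued with scripts/rpc.py directly; a sketch, with the flags copied verbatim from the rpc_cmd call at delete_subsystem.sh@15:

  # Create the RDMA transport with 1024 shared buffers and an 8192-byte IO unit,
  # matching the NVMF_TRANSPORT_OPTS assembled by nvmf/common.sh for rdma.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192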
00:07:16.168 [2024-11-29 21:36:48.414126] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6efdc0/0x734460) succeed. 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.428 [2024-11-29 21:36:48.502585] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.428 NULL1 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.428 Delay0 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2877069 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:16.428 21:36:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:16.428 [2024-11-29 21:36:48.616539] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:18.331 21:36:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.331 21:36:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.331 21:36:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.705 NVMe io qpair process completion error 00:07:19.705 NVMe io qpair process completion error 00:07:19.705 NVMe io qpair process completion error 00:07:19.705 NVMe io qpair process completion error 00:07:19.705 NVMe io qpair process completion error 00:07:19.705 NVMe io qpair process completion error 00:07:19.705 NVMe io qpair process completion error 00:07:19.705 21:36:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.705 21:36:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:19.705 21:36:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2877069 00:07:19.705 21:36:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:19.964 21:36:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:19.964 21:36:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2877069 00:07:19.964 21:36:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Write completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Write completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Write completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Write completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Read completed with error (sct=0, sc=8) 00:07:20.534 starting I/O failed: -6 00:07:20.534 Write completed with error (sct=0, sc=8) 00:07:20.534 starting I/O 
failed: -6 00:07:20.534 [several hundred further "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" notices, identical in form, condensed here; they continue while nvmf_delete_subsystem tears the subsystem down under the running perf workload] 00:07:20.535 Initializing NVMe Controllers 00:07:20.535 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:20.535 Controller IO queue size 128, less than required. 00:07:20.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:20.535 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:20.535 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:20.535 Initialization complete. Launching workers. 
00:07:20.535 ======================================================== 00:07:20.535 Latency(us) 00:07:20.535 Device Information : IOPS MiB/s Average min max 00:07:20.535 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.49 0.04 1593367.50 1000090.00 2975347.73 00:07:20.535 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.49 0.04 1594928.32 1001131.59 2976868.94 00:07:20.535 ======================================================== 00:07:20.535 Total : 160.99 0.08 1594147.91 1000090.00 2976868.94 00:07:20.535 00:07:20.535 [2024-11-29 21:36:52.702190] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:07:20.535 21:36:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:20.535 21:36:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2877069 00:07:20.535 21:36:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:20.535 [2024-11-29 21:36:52.716404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:20.535 [2024-11-29 21:36:52.716429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:07:20.535 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2877069 00:07:21.105 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2877069) - No such process 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2877069 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2877069 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2877069 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.105 [2024-11-29 21:36:53.232028] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.105 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.106 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.106 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2877905 00:07:21.106 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:21.106 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:21.106 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:21.106 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:21.106 [2024-11-29 21:36:53.319630] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
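Both perf runs use the same invocation shape (spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -q 128 -w randrw -M 70 -o 512 -P 4, with -t 5 for the first run and -t 3 for this one), and in both cases the test deletes the subsystem while perf is running and then polls for the perf process to die. A sketch of that wait loop, reconstructed from the traced lines delete_subsystem.sh@56-60 (variable names follow the trace; the 20-iteration bound is the (( delay++ > 20 )) guard visible in the polling below):

  # Poll twice a second until the orphaned perf process exits; fail the test
  # if it is still alive after roughly ten seconds.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo 'perf did not exit' >&2; exit 1; }
      sleep 0.5
  done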
00:07:21.674 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:21.674 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:21.674 21:36:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:22.243 21:36:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:22.243 21:36:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:22.243 21:36:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:22.812 21:36:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:22.812 21:36:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:22.812 21:36:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.070 21:36:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.070 21:36:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:23.070 21:36:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.637 21:36:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.637 21:36:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:23.637 21:36:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.205 21:36:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.205 21:36:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:24.205 21:36:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.773 21:36:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.773 21:36:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:24.773 21:36:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.342 21:36:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.342 21:36:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:25.342 21:36:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.601 21:36:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.601 21:36:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:25.601 21:36:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.170 21:36:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.170 21:36:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:26.170 21:36:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.738 21:36:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.738 21:36:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:26.738 21:36:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.307 21:36:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.307 21:36:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:27.307 21:36:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.874 21:36:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.874 21:36:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:27.874 21:36:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.133 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.133 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:28.133 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.392 Initializing NVMe Controllers 00:07:28.392 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.392 Controller IO queue size 128, less than required. 00:07:28.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:28.392 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:28.392 Initialization complete. Launching workers. 
00:07:28.392 ======================================================== 00:07:28.392 Latency(us) 00:07:28.392 Device Information : IOPS MiB/s Average min max 00:07:28.392 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001125.71 1000053.39 1003561.91 00:07:28.392 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002283.27 1000105.54 1005937.12 00:07:28.392 ======================================================== 00:07:28.392 Total : 256.00 0.12 1001704.49 1000053.39 1005937.12 00:07:28.392 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2877905 00:07:28.651 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2877905) - No such process 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2877905 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.651 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:28.651 rmmod nvme_rdma 00:07:28.651 rmmod nvme_fabrics 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 2876940 ']' 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 2876940 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2876940 ']' 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2876940 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.652 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2876940 00:07:28.911 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:07:28.911 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.911 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2876940' 00:07:28.911 killing process with pid 2876940 00:07:28.911 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2876940 00:07:28.911 21:37:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2876940 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:07:29.171 00:07:29.171 real 0m20.235s 00:07:29.171 user 0m49.004s 00:07:29.171 sys 0m6.520s 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.171 ************************************ 00:07:29.171 END TEST nvmf_delete_subsystem 00:07:29.171 ************************************ 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.171 ************************************ 00:07:29.171 START TEST nvmf_host_management 00:07:29.171 ************************************ 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:29.171 * Looking for test storage... 
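run_test, which prints the START/END banners seen here, is the autotest harness's suite wrapper: it echoes the banner, times the wrapped script (the real/user/sys figures at the close of nvmf_delete_subsystem above), and propagates its exit code. The chaining call for this suite, copied from the traced line at nvmf_target_core.sh@26 (run_test itself is defined in autotest_common.sh):

  # Kick off the next suite against the RDMA transport.
  run_test nvmf_host_management \
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh \
      --transport=rdma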
00:07:29.171 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:29.171 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:29.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.432 --rc genhtml_branch_coverage=1 00:07:29.432 --rc genhtml_function_coverage=1 00:07:29.432 --rc genhtml_legend=1 00:07:29.432 --rc geninfo_all_blocks=1 00:07:29.432 --rc geninfo_unexecuted_blocks=1 00:07:29.432 00:07:29.432 ' 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:29.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.432 --rc genhtml_branch_coverage=1 00:07:29.432 --rc genhtml_function_coverage=1 00:07:29.432 --rc genhtml_legend=1 00:07:29.432 --rc geninfo_all_blocks=1 00:07:29.432 --rc geninfo_unexecuted_blocks=1 00:07:29.432 00:07:29.432 ' 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:29.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.432 --rc genhtml_branch_coverage=1 00:07:29.432 --rc genhtml_function_coverage=1 00:07:29.432 --rc genhtml_legend=1 00:07:29.432 --rc geninfo_all_blocks=1 00:07:29.432 --rc geninfo_unexecuted_blocks=1 00:07:29.432 00:07:29.432 ' 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:29.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.432 --rc genhtml_branch_coverage=1 00:07:29.432 --rc genhtml_function_coverage=1 00:07:29.432 --rc genhtml_legend=1 00:07:29.432 --rc geninfo_all_blocks=1 00:07:29.432 --rc geninfo_unexecuted_blocks=1 00:07:29.432 00:07:29.432 ' 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.432 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.433 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.433 21:37:01 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:36.002 21:37:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:36.002 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:36.002 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:36.002 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:36.002 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:36.002 21:37:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # rdma_device_init 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:36.002 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@526 -- # allocate_nic_ips 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:36.003 
21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:36.003 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.003 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:36.003 altname enp217s0f0np0 00:07:36.003 altname ens818f0np0 00:07:36.003 inet 192.168.100.8/24 scope global mlx_0_0 00:07:36.003 valid_lft forever preferred_lft forever 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:36.003 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.003 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:36.003 altname enp217s0f1np1 00:07:36.003 altname ens818f1np1 00:07:36.003 inet 192.168.100.9/24 
scope global mlx_0_1 00:07:36.003 valid_lft forever preferred_lft forever 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:36.003 21:37:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:36.003 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:07:36.263 192.168.100.9' 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:07:36.263 192.168.100.9' 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # head -n 1 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:07:36.263 192.168.100.9' 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # head -n 1 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # tail -n +2 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=2882553 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:36.263 21:37:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 2882553 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2882553 ']' 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.263 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.263 [2024-11-29 21:37:08.355125] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:36.263 [2024-11-29 21:37:08.355177] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.263 [2024-11-29 21:37:08.426324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.263 [2024-11-29 21:37:08.466488] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.263 [2024-11-29 21:37:08.466528] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.263 [2024-11-29 21:37:08.466540] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.263 [2024-11-29 21:37:08.466548] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.263 [2024-11-29 21:37:08.466555] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
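
Just before nvmf_tgt is launched above, nvmftestinit collected the IPv4 address of each RDMA interface into a newline-separated RDMA_IP_LIST and split it with head/tail into the first and second target IPs. A standalone sketch of that split, using the addresses this run discovered (variable names mirror the trace; the final guard condenses the harness's `'[' -z ... ']'` check):

    # Addresses gathered from mlx_0_0 and mlx_0_1 via `ip -o -4 addr show`.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    # Abort early if no RDMA-capable interface carried an address.
    [ -z "$NVMF_FIRST_TARGET_IP" ] && { echo 'no RDMA IPs found' >&2; exit 1; }
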
00:07:36.263 [2024-11-29 21:37:08.466657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.263 [2024-11-29 21:37:08.466757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.263 [2024-11-29 21:37:08.466867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.263 [2024-11-29 21:37:08.466868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.523 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.523 [2024-11-29 21:37:08.654583] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1569250/0x156d700) succeed. 00:07:36.523 [2024-11-29 21:37:08.666115] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x156a840/0x15aeda0) succeed. 
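
The four "Reactor started" notices line up with the `-m 0x1E` mask passed to nvmf_tgt: 0x1E is binary 11110, so the target's reactors occupy cores 1 through 4, leaving core 0 free for the bdevperf initiator started later with `-c 0x1`. A quick illustrative way to expand such a mask:

    mask=0x1E
    for (( core = 0; core < 64; core++ )); do
        # Test bit <core>; bash arithmetic accepts the hex literal directly.
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
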
00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.782 Malloc0 00:07:36.782 [2024-11-29 21:37:08.846240] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2882834 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2882834 /var/tmp/bdevperf.sock 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2882834 ']' 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
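
The create_subsystems step above renders rpcs.txt (the `cat` at host_management.sh@23) and replays it through a single rpc_cmd, which is what produces the Malloc0 bdev and the RDMA listener notice on 192.168.100.8 port 4420. The file's contents are not echoed in the trace; judging from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE and the NQNs used later in the test, a plausible equivalent as individual rpc.py calls would be (a hedged reconstruction, not the script's literal rpcs.txt):

    # Sizes, serial, and NQNs are taken from the trace; the transport itself
    # was already created with nvmf_create_transport above.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
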
00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:36.782 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:36.782 { 00:07:36.782 "params": { 00:07:36.782 "name": "Nvme$subsystem", 00:07:36.782 "trtype": "$TEST_TRANSPORT", 00:07:36.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.782 "adrfam": "ipv4", 00:07:36.782 "trsvcid": "$NVMF_PORT", 00:07:36.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.782 "hdgst": ${hdgst:-false}, 00:07:36.782 "ddgst": ${ddgst:-false} 00:07:36.782 }, 00:07:36.782 "method": "bdev_nvme_attach_controller" 00:07:36.782 } 00:07:36.782 EOF 00:07:36.782 )") 00:07:36.783 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:36.783 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:36.783 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:36.783 21:37:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:36.783 "params": { 00:07:36.783 "name": "Nvme0", 00:07:36.783 "trtype": "rdma", 00:07:36.783 "traddr": "192.168.100.8", 00:07:36.783 "adrfam": "ipv4", 00:07:36.783 "trsvcid": "4420", 00:07:36.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:36.783 "hdgst": false, 00:07:36.783 "ddgst": false 00:07:36.783 }, 00:07:36.783 "method": "bdev_nvme_attach_controller" 00:07:36.783 }' 00:07:36.783 [2024-11-29 21:37:08.948798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:36.783 [2024-11-29 21:37:08.948854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882834 ] 00:07:36.783 [2024-11-29 21:37:09.020755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.041 [2024-11-29 21:37:09.059619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.041 Running I/O for 10 seconds... 
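
gen_nvmf_target_json renders the attach-controller JSON printed above and feeds it to bdevperf through process substitution, which is why the command line reads `--json /dev/fd/63`. Run outside the harness, the same invocation looks roughly like this, with that JSON saved to a file first (the file name is illustrative):

    # -q 64: queue depth; -o 65536: 64 KiB I/O size; -w verify: write, read
    # back, and compare; -t 10: run for ten seconds.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json nvme0_attach.json -q 64 -o 65536 -w verify -t 10
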
00:07:37.041 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.041 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:37.041 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:37.041 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.041 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.299 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=171 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 171 -ge 100 ']' 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
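
waitforio above polls the bdevperf RPC socket until Nvme0n1 shows a minimum amount of traffic (171 completed reads against a threshold of 100 in this run) before the test yanks host0 from cnode0's allowlist mid-I/O. The loop, distilled (same RPCs as the trace; the retry pacing is a simplification):

    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 0.5
    done
    (( i != 0 )) || { echo 'Nvme0n1 never saw I/O' >&2; exit 1; }

Once the host is removed, the target tears down the queue pair and every in-flight command completes with ABORTED - SQ DELETION, which is exactly what the long nvme_qpair dump that follows records.
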
00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.300 21:37:09 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:38.237 296.00 IOPS, 18.50 MiB/s [2024-11-29T20:37:10.485Z] [2024-11-29 21:37:10.362358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff80 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcff00 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfe80 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafe00 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fd80 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fd00 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fc80 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b6fc00 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b5fb80 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b4fb00 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b3fa80 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b2fa00 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b1f980 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b0f900 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aff880 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aef800 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000adf780 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000acf700 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000abf680 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aaf600 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a9f580 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a8f500 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a7f480 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a6f400 len:0x10000 key:0x181d00 00:07:38.237 [2024-11-29 21:37:10.362879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0 00:07:38.237 [2024-11-29 21:37:10.362890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a5f380 len:0x10000 key:0x181d00 00:07:38.238 [2024-11-29 21:37:10.362899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0
00:07:38.238 [2024-11-29 21:37:10.362909 - 21:37:10.363659] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: condensed repeat block: WRITE sqid:1 lba:44160-45952 len:128 and READ sqid:1 lba:37888-40832 len:128, each command completed ABORTED - SQ DELETION (00/08) qid:1 cid:57967 cdw0:65b93000 sqhd:ae88 p:1 m:0 dnr:0
00:07:38.239 [2024-11-29 21:37:10.365648] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae5080 was disconnected and freed. reset controller.
00:07:38.239 [2024-11-29 21:37:10.366600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:07:38.239 task offset: 40960 on job bdev=Nvme0n1 fails
00:07:38.239
00:07:38.239 Latency(us)
[2024-11-29T20:37:10.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:38.239 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:38.239 Job: Nvme0n1 ended in about 1.12 seconds with error
00:07:38.239 Verification LBA range: start 0x0 length 0x400
00:07:38.239 Nvme0n1 : 1.12 264.01 16.50 57.08 0.00 197660.73 2306.87 1020054.73
00:07:38.239 [2024-11-29T20:37:10.487Z] ===================================================================================================================
00:07:38.239 [2024-11-29T20:37:10.487Z] Total : 264.01 16.50 57.08 0.00 197660.73 2306.87 1020054.73
00:07:38.239 [2024-11-29 21:37:10.369238] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2882834 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:38.239 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management
-- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:38.239 { 00:07:38.239 "params": { 00:07:38.239 "name": "Nvme$subsystem", 00:07:38.239 "trtype": "$TEST_TRANSPORT", 00:07:38.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:38.239 "adrfam": "ipv4", 00:07:38.239 "trsvcid": "$NVMF_PORT", 00:07:38.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:38.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:38.239 "hdgst": ${hdgst:-false}, 00:07:38.239 "ddgst": ${ddgst:-false} 00:07:38.239 }, 00:07:38.239 "method": "bdev_nvme_attach_controller" 00:07:38.239 } 00:07:38.239 EOF 00:07:38.239 )") 00:07:38.239 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:07:38.239 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:07:38.239 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:07:38.239 21:37:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:38.239 "params": { 00:07:38.239 "name": "Nvme0", 00:07:38.239 "trtype": "rdma", 00:07:38.239 "traddr": "192.168.100.8", 00:07:38.239 "adrfam": "ipv4", 00:07:38.239 "trsvcid": "4420", 00:07:38.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:38.239 "hdgst": false, 00:07:38.239 "ddgst": false 00:07:38.239 }, 00:07:38.239 "method": "bdev_nvme_attach_controller" 00:07:38.239 }' 00:07:38.239 [2024-11-29 21:37:10.440530] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:38.239 [2024-11-29 21:37:10.440595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883075 ] 00:07:38.498 [2024-11-29 21:37:10.514596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.498 [2024-11-29 21:37:10.553423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.498 Running I/O for 1 seconds... 
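For reference, the config that gen_nvmf_target_json pipes to bdevperf through /dev/fd/62 above can be reproduced standalone. A minimal sketch using the values from this run's printf output; the /tmp/bdevperf.json path is hypothetical, and the bdev-subsystem envelope is the standard SPDK app JSON layout that the --json loader consumes:

# assemble the single bdev_nvme_attach_controller entry shown above
# into a standalone config file
cat <<'EOF' > /tmp/bdevperf.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload parameters as the run above: queue depth 64, 64 KiB verify I/O
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1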
00:07:39.876 3117.00 IOPS, 194.81 MiB/s
00:07:39.876 Latency(us)
[2024-11-29T20:37:12.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:39.876 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:39.876 Verification LBA range: start 0x0 length 0x400
00:07:39.876 Nvme0n1 : 1.01 3138.55 196.16 0.00 0.00 19984.83 345.70 39845.89
00:07:39.876 [2024-11-29T20:37:12.124Z] ===================================================================================================================
00:07:39.876 [2024-11-29T20:37:12.124Z] Total : 3138.55 196.16 0.00 0.00 19984.83 345.70 39845.89
00:07:39.877 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2882834 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
21:37:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 2882553 ']' 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 2882553 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2882553 ']' 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2882553 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management --
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2882553 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2882553' 00:07:39.877 killing process with pid 2882553 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2882553 00:07:39.877 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2882553 00:07:40.136 [2024-11-29 21:37:12.325984] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:40.136 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:40.136 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:07:40.136 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:40.136 00:07:40.136 real 0m11.104s 00:07:40.136 user 0m19.967s 00:07:40.136 sys 0m6.237s 00:07:40.136 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.136 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:40.136 ************************************ 00:07:40.136 END TEST nvmf_host_management 00:07:40.136 ************************************ 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.397 ************************************ 00:07:40.397 START TEST nvmf_lvol 00:07:40.397 ************************************ 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:40.397 * Looking for test storage... 
00:07:40.397 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.397 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:40.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.398 --rc genhtml_branch_coverage=1 00:07:40.398 --rc genhtml_function_coverage=1 00:07:40.398 --rc genhtml_legend=1 00:07:40.398 --rc geninfo_all_blocks=1 00:07:40.398 --rc geninfo_unexecuted_blocks=1 00:07:40.398 00:07:40.398 ' 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:40.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.398 --rc genhtml_branch_coverage=1 00:07:40.398 --rc genhtml_function_coverage=1 00:07:40.398 --rc genhtml_legend=1 00:07:40.398 --rc geninfo_all_blocks=1 00:07:40.398 --rc geninfo_unexecuted_blocks=1 00:07:40.398 00:07:40.398 ' 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:40.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.398 --rc genhtml_branch_coverage=1 00:07:40.398 --rc genhtml_function_coverage=1 00:07:40.398 --rc genhtml_legend=1 00:07:40.398 --rc geninfo_all_blocks=1 00:07:40.398 --rc geninfo_unexecuted_blocks=1 00:07:40.398 00:07:40.398 ' 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:40.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.398 --rc genhtml_branch_coverage=1 00:07:40.398 --rc genhtml_function_coverage=1 00:07:40.398 --rc genhtml_legend=1 00:07:40.398 --rc geninfo_all_blocks=1 00:07:40.398 --rc geninfo_unexecuted_blocks=1 00:07:40.398 00:07:40.398 ' 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.398 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.659 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.659 21:37:12 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.785 21:37:19 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:48.785 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:48.785 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:48.785 21:37:19 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:07:48.785 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:48.786 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:48.786 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # rdma_device_init 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:48.786 21:37:19 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@526 -- # allocate_nic_ips 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:48.786 21:37:19 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:48.786 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:48.786 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:48.786 altname enp217s0f0np0 00:07:48.786 altname ens818f0np0 00:07:48.786 inet 192.168.100.8/24 scope global mlx_0_0 00:07:48.786 valid_lft forever preferred_lft forever 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:48.786 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:48.786 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:48.786 altname enp217s0f1np1 00:07:48.786 altname ens818f1np1 00:07:48.786 inet 192.168.100.9/24 scope global mlx_0_1 00:07:48.786 valid_lft forever preferred_lft forever 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:48.786 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:07:48.786 192.168.100.9' 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:07:48.787 192.168.100.9' 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # head -n 1 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:07:48.787 192.168.100.9' 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # head -n 1 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # tail -n +2 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-rdma 
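The address discovery above boils down to reading the first IPv4 address off each mlx net device. A minimal sketch of the same extraction, using the interface names enumerated in this run:

for ifc in mlx_0_0 mlx_0_1; do
  # fourth field of `ip -o -4 addr show` is addr/prefix; strip the prefix,
  # exactly as get_ip_address does in the trace above
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# expected on this host: 192.168.100.8 and 192.168.100.9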
00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=2886841 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 2886841 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2886841 ']' 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.787 21:37:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:48.787 [2024-11-29 21:37:19.900499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:48.787 [2024-11-29 21:37:19.900552] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.787 [2024-11-29 21:37:19.971676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.787 [2024-11-29 21:37:20.017888] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.787 [2024-11-29 21:37:20.017931] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.787 [2024-11-29 21:37:20.017942] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.787 [2024-11-29 21:37:20.017951] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.787 [2024-11-29 21:37:20.017958] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
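nvmfappstart launches nvmf_tgt on three cores (-m 0x7) and waitforlisten (from autotest_common.sh) then blocks until the target's RPC socket answers. A rough standalone equivalent, assuming the default /var/tmp/spdk.sock; the polling loop is a sketch, not the helper's actual implementation:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# poll until the target answers RPCs; rpc_get_methods is a cheap call
# that succeeds as soon as the app is listening on the socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done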
00:07:48.787 [2024-11-29 21:37:20.018011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.787 [2024-11-29 21:37:20.018085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.787 [2024-11-29 21:37:20.018086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:48.787 [2024-11-29 21:37:20.364334] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15e1410/0x15e58c0) succeed. 00:07:48.787 [2024-11-29 21:37:20.374646] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15e2960/0x1626f60) succeed. 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:48.787 21:37:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:49.045 21:37:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:49.303 21:37:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=25871ff3-2fa6-47bf-b74d-14423b663457 00:07:49.303 21:37:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 25871ff3-2fa6-47bf-b74d-14423b663457 lvol 20 00:07:49.303 21:37:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=247dd8d0-e7eb-4b33-b892-a4aa8d8fca21 00:07:49.303 21:37:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.561 21:37:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 247dd8d0-e7eb-4b33-b892-a4aa8d8fca21 00:07:49.821 21:37:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:49.821 [2024-11-29 21:37:22.048412] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:50.080 21:37:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:50.080 21:37:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2887185 00:07:50.080 21:37:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:50.080 21:37:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:51.076 21:37:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 247dd8d0-e7eb-4b33-b892-a4aa8d8fca21 MY_SNAPSHOT 00:07:51.335 21:37:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=dd64e654-537e-46e1-84fc-c0ea94f78bcf 00:07:51.335 21:37:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 247dd8d0-e7eb-4b33-b892-a4aa8d8fca21 30 00:07:51.592 21:37:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone dd64e654-537e-46e1-84fc-c0ea94f78bcf MY_CLONE 00:07:51.851 21:37:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6a937255-72af-4fa4-9ebb-fd7e6d2cb106 00:07:51.851 21:37:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6a937255-72af-4fa4-9ebb-fd7e6d2cb106 00:07:52.109 21:37:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2887185 00:08:02.083 Initializing NVMe Controllers 00:08:02.083 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:02.083 Controller IO queue size 128, less than required. 00:08:02.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:02.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:02.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:02.083 Initialization complete. Launching workers. 
00:08:02.083 ======================================================== 00:08:02.083 Latency(us) 00:08:02.083 Device Information : IOPS MiB/s Average min max 00:08:02.083 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16353.80 63.88 7828.18 2053.49 41308.40 00:08:02.083 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16284.80 63.61 7861.10 3499.00 48867.00 00:08:02.083 ======================================================== 00:08:02.083 Total : 32638.60 127.49 7844.60 2053.49 48867.00 00:08:02.083 00:08:02.083 21:37:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:02.083 21:37:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 247dd8d0-e7eb-4b33-b892-a4aa8d8fca21 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25871ff3-2fa6-47bf-b74d-14423b663457 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:02.083 rmmod nvme_rdma 00:08:02.083 rmmod nvme_fabrics 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 2886841 ']' 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 2886841 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2886841 ']' 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2886841 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.083 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2886841 00:08:02.343 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.343 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.343 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2886841' 00:08:02.343 killing process with pid 2886841 00:08:02.343 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2886841 00:08:02.343 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2886841 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:08:02.603 00:08:02.603 real 0m22.234s 00:08:02.603 user 1m10.556s 00:08:02.603 sys 0m6.714s 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.603 ************************************ 00:08:02.603 END TEST nvmf_lvol 00:08:02.603 ************************************ 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.603 ************************************ 00:08:02.603 START TEST nvmf_lvs_grow 00:08:02.603 ************************************ 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:02.603 * Looking for test storage... 
00:08:02.603 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:02.603 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:02.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.862 --rc genhtml_branch_coverage=1 00:08:02.862 --rc genhtml_function_coverage=1 00:08:02.862 --rc genhtml_legend=1 00:08:02.862 --rc geninfo_all_blocks=1 00:08:02.862 --rc geninfo_unexecuted_blocks=1 00:08:02.862 00:08:02.862 ' 00:08:02.862 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:02.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.863 --rc genhtml_branch_coverage=1 00:08:02.863 --rc genhtml_function_coverage=1 00:08:02.863 --rc genhtml_legend=1 00:08:02.863 --rc geninfo_all_blocks=1 00:08:02.863 --rc geninfo_unexecuted_blocks=1 00:08:02.863 00:08:02.863 ' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.863 --rc genhtml_branch_coverage=1 00:08:02.863 --rc genhtml_function_coverage=1 00:08:02.863 --rc genhtml_legend=1 00:08:02.863 --rc geninfo_all_blocks=1 00:08:02.863 --rc geninfo_unexecuted_blocks=1 00:08:02.863 00:08:02.863 ' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:02.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.863 --rc genhtml_branch_coverage=1 00:08:02.863 --rc genhtml_function_coverage=1 00:08:02.863 --rc genhtml_legend=1 00:08:02.863 --rc geninfo_all_blocks=1 00:08:02.863 --rc geninfo_unexecuted_blocks=1 00:08:02.863 00:08:02.863 ' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
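The cmp_versions trace above is a plain-bash dotted-version comparison used to pick the lcov option spelling. A minimal restatement (ver_lt is a hypothetical condensation of the lt/cmp_versions helpers in scripts/common.sh, not their exact code):
ver_lt() {
    # Split both versions on dots and compare field by field;
    # missing fields count as 0.
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
ver_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option spelling"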
00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.863 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.863 21:37:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.433 21:37:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:09.433 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:09.433 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:08:09.433 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:09.434 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:09.434 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # rdma_device_init 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:09.434 21:37:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@526 -- # allocate_nic_ips 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:09.434 
21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:09.434 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:09.434 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:09.434 altname enp217s0f0np0 00:08:09.434 altname ens818f0np0 00:08:09.434 inet 192.168.100.8/24 scope global mlx_0_0 00:08:09.434 valid_lft forever preferred_lft forever 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:09.434 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:09.434 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:09.434 altname enp217s0f1np1 00:08:09.434 altname ens818f1np1 00:08:09.434 inet 192.168.100.9/24 scope global mlx_0_1 00:08:09.434 valid_lft forever preferred_lft forever 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:09.434 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:08:09.435 192.168.100.9' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:08:09.435 192.168.100.9' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # head -n 1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # head -n 1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:08:09.435 192.168.100.9' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # tail -n +2 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:08:09.435 21:37:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=2892713 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 2892713 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2892713 ']' 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.435 [2024-11-29 21:37:41.326452] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:09.435 [2024-11-29 21:37:41.326515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.435 [2024-11-29 21:37:41.398783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.435 [2024-11-29 21:37:41.438316] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.435 [2024-11-29 21:37:41.438357] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.435 [2024-11-29 21:37:41.438366] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.435 [2024-11-29 21:37:41.438375] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.435 [2024-11-29 21:37:41.438382] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
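The lvs_grow_clean setup traced below reduces to: back an AIO bdev with a file, build an lvstore and lvol on it, grow the file, rescan, then grow the lvstore. A condensed sketch, not the verbatim nvmf_lvs_grow.sh ($RPC stands for the workspace scripts/rpc.py, aio_file for the aio_bdev backing path shown in the trace, and the lvstore UUID comes from the create call's reply, not the literal in the log):
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
truncate -s 200M aio_file                        # 200 MiB backing file
$RPC bdev_aio_create aio_file aio_bdev 4096      # AIO bdev, 4 KiB blocks
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters
$RPC bdev_lvol_create -u "$lvs" lvol 150         # 150 MiB lvol
truncate -s 400M aio_file                        # grow the file under the bdev
$RPC bdev_aio_rescan aio_bdev                    # block count 51200 -> 102400
$RPC bdev_lvol_grow_lvstore -u "$lvs"            # data clusters 49 -> 99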
00:08:09.435 [2024-11-29 21:37:41.438414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.435 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:09.694 [2024-11-29 21:37:41.760552] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17daf70/0x17df420) succeed. 00:08:09.694 [2024-11-29 21:37:41.770170] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17dc420/0x1820ac0) succeed. 00:08:09.694 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:09.694 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.694 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.694 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.694 ************************************ 00:08:09.694 START TEST lvs_grow_clean 00:08:09.694 ************************************ 00:08:09.694 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.695 21:37:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.954 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:09.954 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:10.213 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=16ac692d-019d-4222-815c-5d9ae74f3097 00:08:10.213 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:10.213 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:10.471 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:10.471 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:10.471 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16ac692d-019d-4222-815c-5d9ae74f3097 lvol 150 00:08:10.472 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a5185b1b-f78a-4ca9-8702-ffe17684b854 00:08:10.472 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.472 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:10.730 [2024-11-29 21:37:42.836457] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:10.730 [2024-11-29 21:37:42.836510] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:10.730 true 00:08:10.730 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:10.730 21:37:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:10.989 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:10.990 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:10.990 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a5185b1b-f78a-4ca9-8702-ffe17684b854 00:08:11.248 21:37:43 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:11.506 [2024-11-29 21:37:43.570988] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:11.507 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2893040 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2893040 /var/tmp/bdevperf.sock 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2893040 ']' 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:11.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.766 21:37:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:11.766 [2024-11-29 21:37:43.803108] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:11.766 [2024-11-29 21:37:43.803162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893040 ] 00:08:11.766 [2024-11-29 21:37:43.874601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.766 [2024-11-29 21:37:43.912515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.766 21:37:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.766 21:37:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:11.766 21:37:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:12.025 Nvme0n1 00:08:12.284 21:37:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:12.284 [ 00:08:12.284 { 00:08:12.284 "name": "Nvme0n1", 00:08:12.284 "aliases": [ 00:08:12.284 "a5185b1b-f78a-4ca9-8702-ffe17684b854" 00:08:12.284 ], 00:08:12.284 "product_name": "NVMe disk", 00:08:12.284 "block_size": 4096, 00:08:12.284 "num_blocks": 38912, 00:08:12.284 "uuid": "a5185b1b-f78a-4ca9-8702-ffe17684b854", 00:08:12.284 "numa_id": 1, 00:08:12.284 "assigned_rate_limits": { 00:08:12.284 "rw_ios_per_sec": 0, 00:08:12.284 "rw_mbytes_per_sec": 0, 00:08:12.284 "r_mbytes_per_sec": 0, 00:08:12.284 "w_mbytes_per_sec": 0 00:08:12.284 }, 00:08:12.284 "claimed": false, 00:08:12.284 "zoned": false, 00:08:12.284 "supported_io_types": { 00:08:12.284 "read": true, 00:08:12.284 "write": true, 00:08:12.284 "unmap": true, 00:08:12.284 "flush": true, 00:08:12.284 "reset": true, 00:08:12.284 "nvme_admin": true, 00:08:12.284 "nvme_io": true, 00:08:12.284 "nvme_io_md": false, 00:08:12.284 "write_zeroes": true, 00:08:12.284 "zcopy": false, 00:08:12.284 "get_zone_info": false, 00:08:12.284 "zone_management": false, 00:08:12.284 "zone_append": false, 00:08:12.284 "compare": true, 00:08:12.284 "compare_and_write": true, 00:08:12.284 "abort": true, 00:08:12.284 "seek_hole": false, 00:08:12.284 "seek_data": false, 00:08:12.284 "copy": true, 00:08:12.284 "nvme_iov_md": false 00:08:12.284 }, 00:08:12.284 "memory_domains": [ 00:08:12.284 { 00:08:12.284 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:12.284 "dma_device_type": 0 00:08:12.284 } 00:08:12.284 ], 00:08:12.284 "driver_specific": { 00:08:12.284 "nvme": [ 00:08:12.284 { 00:08:12.284 "trid": { 00:08:12.284 "trtype": "RDMA", 00:08:12.284 "adrfam": "IPv4", 00:08:12.284 "traddr": "192.168.100.8", 00:08:12.284 "trsvcid": "4420", 00:08:12.284 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:12.284 }, 00:08:12.284 "ctrlr_data": { 00:08:12.284 "cntlid": 1, 00:08:12.284 "vendor_id": "0x8086", 00:08:12.284 "model_number": "SPDK bdev Controller", 00:08:12.284 "serial_number": "SPDK0", 00:08:12.284 "firmware_revision": "24.09.1", 00:08:12.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.284 "oacs": { 00:08:12.284 "security": 0, 00:08:12.284 "format": 0, 00:08:12.284 "firmware": 0, 00:08:12.284 "ns_manage": 0 00:08:12.284 }, 00:08:12.284 "multi_ctrlr": true, 
00:08:12.284 "ana_reporting": false 00:08:12.284 }, 00:08:12.284 "vs": { 00:08:12.284 "nvme_version": "1.3" 00:08:12.284 }, 00:08:12.284 "ns_data": { 00:08:12.284 "id": 1, 00:08:12.284 "can_share": true 00:08:12.284 } 00:08:12.284 } 00:08:12.284 ], 00:08:12.284 "mp_policy": "active_passive" 00:08:12.284 } 00:08:12.284 } 00:08:12.284 ] 00:08:12.284 21:37:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2893306 00:08:12.284 21:37:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:12.284 21:37:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:12.544 Running I/O for 10 seconds... 00:08:13.478 Latency(us) 00:08:13.478 [2024-11-29T20:37:45.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.478 Nvme0n1 : 1.00 34529.00 134.88 0.00 0.00 0.00 0.00 0.00 00:08:13.478 [2024-11-29T20:37:45.726Z] =================================================================================================================== 00:08:13.478 [2024-11-29T20:37:45.726Z] Total : 34529.00 134.88 0.00 0.00 0.00 0.00 0.00 00:08:13.478 00:08:14.413 21:37:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:14.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.413 Nvme0n1 : 2.00 35040.00 136.88 0.00 0.00 0.00 0.00 0.00 00:08:14.413 [2024-11-29T20:37:46.661Z] =================================================================================================================== 00:08:14.413 [2024-11-29T20:37:46.661Z] Total : 35040.00 136.88 0.00 0.00 0.00 0.00 0.00 00:08:14.413 00:08:14.413 true 00:08:14.672 21:37:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:14.672 21:37:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:14.672 21:37:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:14.672 21:37:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:14.672 21:37:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2893306 00:08:15.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.609 Nvme0n1 : 3.00 35167.33 137.37 0.00 0.00 0.00 0.00 0.00 00:08:15.609 [2024-11-29T20:37:47.857Z] =================================================================================================================== 00:08:15.609 [2024-11-29T20:37:47.857Z] Total : 35167.33 137.37 0.00 0.00 0.00 0.00 0.00 00:08:15.609 00:08:16.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.546 Nvme0n1 : 4.00 35336.00 138.03 0.00 0.00 0.00 0.00 0.00 00:08:16.546 [2024-11-29T20:37:48.794Z] 
=================================================================================================================== 00:08:16.546 [2024-11-29T20:37:48.794Z] Total : 35336.00 138.03 0.00 0.00 0.00 0.00 0.00 00:08:16.546 00:08:17.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.483 Nvme0n1 : 5.00 35455.80 138.50 0.00 0.00 0.00 0.00 0.00 00:08:17.483 [2024-11-29T20:37:49.731Z] =================================================================================================================== 00:08:17.483 [2024-11-29T20:37:49.731Z] Total : 35455.80 138.50 0.00 0.00 0.00 0.00 0.00 00:08:17.483 00:08:18.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.420 Nvme0n1 : 6.00 35412.67 138.33 0.00 0.00 0.00 0.00 0.00 00:08:18.420 [2024-11-29T20:37:50.668Z] =================================================================================================================== 00:08:18.420 [2024-11-29T20:37:50.668Z] Total : 35412.67 138.33 0.00 0.00 0.00 0.00 0.00 00:08:18.420 00:08:19.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.356 Nvme0n1 : 7.00 35470.14 138.56 0.00 0.00 0.00 0.00 0.00 00:08:19.356 [2024-11-29T20:37:51.604Z] =================================================================================================================== 00:08:19.356 [2024-11-29T20:37:51.604Z] Total : 35470.14 138.56 0.00 0.00 0.00 0.00 0.00 00:08:19.356 00:08:20.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.733 Nvme0n1 : 8.00 35528.50 138.78 0.00 0.00 0.00 0.00 0.00 00:08:20.733 [2024-11-29T20:37:52.981Z] =================================================================================================================== 00:08:20.733 [2024-11-29T20:37:52.981Z] Total : 35528.50 138.78 0.00 0.00 0.00 0.00 0.00 00:08:20.733 00:08:21.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.669 Nvme0n1 : 9.00 35570.22 138.95 0.00 0.00 0.00 0.00 0.00 00:08:21.669 [2024-11-29T20:37:53.917Z] =================================================================================================================== 00:08:21.669 [2024-11-29T20:37:53.917Z] Total : 35570.22 138.95 0.00 0.00 0.00 0.00 0.00 00:08:21.669 00:08:22.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.605 Nvme0n1 : 10.00 35615.30 139.12 0.00 0.00 0.00 0.00 0.00 00:08:22.605 [2024-11-29T20:37:54.853Z] =================================================================================================================== 00:08:22.605 [2024-11-29T20:37:54.853Z] Total : 35615.30 139.12 0.00 0.00 0.00 0.00 0.00 00:08:22.605 00:08:22.605 00:08:22.605 Latency(us) 00:08:22.605 [2024-11-29T20:37:54.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.605 Nvme0n1 : 10.00 35613.13 139.11 0.00 0.00 3591.39 2686.98 15309.21 00:08:22.605 [2024-11-29T20:37:54.853Z] =================================================================================================================== 00:08:22.605 [2024-11-29T20:37:54.853Z] Total : 35613.13 139.11 0.00 0.00 3591.39 2686.98 15309.21 00:08:22.605 { 00:08:22.605 "results": [ 00:08:22.605 { 00:08:22.605 "job": "Nvme0n1", 00:08:22.605 "core_mask": "0x2", 00:08:22.605 "workload": "randwrite", 00:08:22.605 "status": "finished", 00:08:22.605 "queue_depth": 128, 00:08:22.605 "io_size": 4096, 
00:08:22.605 "runtime": 10.003304, 00:08:22.605 "iops": 35613.13342071779, 00:08:22.605 "mibps": 139.11380242467888, 00:08:22.605 "io_failed": 0, 00:08:22.605 "io_timeout": 0, 00:08:22.605 "avg_latency_us": 3591.394820157811, 00:08:22.605 "min_latency_us": 2686.976, 00:08:22.605 "max_latency_us": 15309.2096 00:08:22.605 } 00:08:22.605 ], 00:08:22.605 "core_count": 1 00:08:22.605 } 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2893040 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2893040 ']' 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2893040 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2893040 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2893040' 00:08:22.605 killing process with pid 2893040 00:08:22.605 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2893040 00:08:22.605 Received shutdown signal, test time was about 10.000000 seconds 00:08:22.605 00:08:22.605 Latency(us) 00:08:22.605 [2024-11-29T20:37:54.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.605 [2024-11-29T20:37:54.854Z] =================================================================================================================== 00:08:22.606 [2024-11-29T20:37:54.854Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:22.606 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2893040 00:08:22.864 21:37:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:22.864 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:23.122 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:23.122 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:23.381 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:23.382 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:23.382 21:37:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.382 [2024-11-29 21:37:55.612504] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:23.641 request: 00:08:23.641 { 00:08:23.641 "uuid": "16ac692d-019d-4222-815c-5d9ae74f3097", 00:08:23.641 "method": "bdev_lvol_get_lvstores", 00:08:23.641 "req_id": 1 00:08:23.641 } 00:08:23.641 Got JSON-RPC error response 00:08:23.641 response: 00:08:23.641 { 00:08:23.641 "code": -19, 00:08:23.641 "message": "No such device" 00:08:23.641 } 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.641 21:37:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.900 aio_bdev 00:08:23.900 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a5185b1b-f78a-4ca9-8702-ffe17684b854 00:08:23.900 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=a5185b1b-f78a-4ca9-8702-ffe17684b854 00:08:23.900 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.900 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:23.900 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.900 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.900 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.159 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a5185b1b-f78a-4ca9-8702-ffe17684b854 -t 2000 00:08:24.159 [ 00:08:24.159 { 00:08:24.159 "name": "a5185b1b-f78a-4ca9-8702-ffe17684b854", 00:08:24.159 "aliases": [ 00:08:24.159 "lvs/lvol" 00:08:24.159 ], 00:08:24.159 "product_name": "Logical Volume", 00:08:24.159 "block_size": 4096, 00:08:24.159 "num_blocks": 38912, 00:08:24.159 "uuid": "a5185b1b-f78a-4ca9-8702-ffe17684b854", 00:08:24.159 "assigned_rate_limits": { 00:08:24.159 "rw_ios_per_sec": 0, 00:08:24.159 "rw_mbytes_per_sec": 0, 00:08:24.159 "r_mbytes_per_sec": 0, 00:08:24.159 "w_mbytes_per_sec": 0 00:08:24.159 }, 00:08:24.159 "claimed": false, 00:08:24.159 "zoned": false, 00:08:24.159 "supported_io_types": { 00:08:24.159 "read": true, 00:08:24.159 "write": true, 00:08:24.159 "unmap": true, 00:08:24.159 "flush": false, 00:08:24.159 "reset": true, 00:08:24.159 "nvme_admin": false, 00:08:24.159 "nvme_io": false, 00:08:24.159 "nvme_io_md": false, 00:08:24.159 "write_zeroes": true, 00:08:24.159 "zcopy": false, 00:08:24.159 "get_zone_info": false, 00:08:24.159 "zone_management": false, 00:08:24.159 "zone_append": false, 00:08:24.159 "compare": false, 00:08:24.159 "compare_and_write": false, 00:08:24.159 "abort": false, 00:08:24.159 "seek_hole": true, 00:08:24.159 "seek_data": true, 00:08:24.159 "copy": false, 00:08:24.159 "nvme_iov_md": false 00:08:24.159 }, 00:08:24.159 "driver_specific": { 00:08:24.159 "lvol": { 00:08:24.159 "lvol_store_uuid": "16ac692d-019d-4222-815c-5d9ae74f3097", 00:08:24.159 "base_bdev": "aio_bdev", 00:08:24.159 "thin_provision": false, 00:08:24.159 "num_allocated_clusters": 38, 00:08:24.159 "snapshot": false, 00:08:24.159 "clone": false, 00:08:24.159 "esnap_clone": false 00:08:24.159 } 00:08:24.159 } 00:08:24.159 } 00:08:24.159 ] 00:08:24.159 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:24.159 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:24.159 21:37:56 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:24.418 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:24.419 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:24.419 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:24.677 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:24.677 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a5185b1b-f78a-4ca9-8702-ffe17684b854 00:08:24.936 21:37:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16ac692d-019d-4222-815c-5d9ae74f3097 00:08:24.936 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.195 00:08:25.195 real 0m15.473s 00:08:25.195 user 0m15.316s 00:08:25.195 sys 0m1.171s 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:25.195 ************************************ 00:08:25.195 END TEST lvs_grow_clean 00:08:25.195 ************************************ 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.195 ************************************ 00:08:25.195 START TEST lvs_grow_dirty 00:08:25.195 ************************************ 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.195 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.454 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:25.454 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:25.713 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b99c66df-7e83-4785-ba56-7b0840657982 00:08:25.713 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:25.713 21:37:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.972 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.972 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.972 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b99c66df-7e83-4785-ba56-7b0840657982 lvol 150 00:08:25.972 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=01ebbc5b-d288-470e-b472-77420ac9e5ce 00:08:25.972 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.972 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:26.231 [2024-11-29 21:37:58.359440] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:26.231 [2024-11-29 21:37:58.359490] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:26.231 true 00:08:26.231 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:26.231 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.490 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.490 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.749 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 01ebbc5b-d288-470e-b472-77420ac9e5ce 00:08:26.749 21:37:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:27.008 [2024-11-29 21:37:59.093852] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:27.008 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2895790 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2895790 /var/tmp/bdevperf.sock 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2895790 ']' 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.267 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.267 [2024-11-29 21:37:59.348037] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
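Both variants export the volume the same way before starting I/O. Sketched below with the listener address, NQN, and socket path from this run, and <lvol-uuid> again standing in for the volume UUID:

    # Expose the logical volume over NVMe-oF/RDMA.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420

    # Attach it from the bdevperf process through its private RPC socket, then
    # kick off the 10 s, 4 KiB random-write workload at queue depth 128.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests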
00:08:27.267 [2024-11-29 21:37:59.348091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895790 ] 00:08:27.267 [2024-11-29 21:37:59.419438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.267 [2024-11-29 21:37:59.458302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.526 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.526 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:27.526 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.785 Nvme0n1 00:08:27.785 21:37:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.785 [ 00:08:27.785 { 00:08:27.785 "name": "Nvme0n1", 00:08:27.785 "aliases": [ 00:08:27.785 "01ebbc5b-d288-470e-b472-77420ac9e5ce" 00:08:27.785 ], 00:08:27.785 "product_name": "NVMe disk", 00:08:27.785 "block_size": 4096, 00:08:27.785 "num_blocks": 38912, 00:08:27.785 "uuid": "01ebbc5b-d288-470e-b472-77420ac9e5ce", 00:08:27.785 "numa_id": 1, 00:08:27.785 "assigned_rate_limits": { 00:08:27.785 "rw_ios_per_sec": 0, 00:08:27.785 "rw_mbytes_per_sec": 0, 00:08:27.785 "r_mbytes_per_sec": 0, 00:08:27.785 "w_mbytes_per_sec": 0 00:08:27.785 }, 00:08:27.785 "claimed": false, 00:08:27.785 "zoned": false, 00:08:27.785 "supported_io_types": { 00:08:27.785 "read": true, 00:08:27.785 "write": true, 00:08:27.785 "unmap": true, 00:08:27.785 "flush": true, 00:08:27.785 "reset": true, 00:08:27.785 "nvme_admin": true, 00:08:27.785 "nvme_io": true, 00:08:27.785 "nvme_io_md": false, 00:08:27.785 "write_zeroes": true, 00:08:27.785 "zcopy": false, 00:08:27.785 "get_zone_info": false, 00:08:27.785 "zone_management": false, 00:08:27.785 "zone_append": false, 00:08:27.785 "compare": true, 00:08:27.785 "compare_and_write": true, 00:08:27.785 "abort": true, 00:08:27.785 "seek_hole": false, 00:08:27.785 "seek_data": false, 00:08:27.785 "copy": true, 00:08:27.785 "nvme_iov_md": false 00:08:27.785 }, 00:08:27.785 "memory_domains": [ 00:08:27.785 { 00:08:27.785 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:27.785 "dma_device_type": 0 00:08:27.785 } 00:08:27.785 ], 00:08:27.785 "driver_specific": { 00:08:27.785 "nvme": [ 00:08:27.785 { 00:08:27.785 "trid": { 00:08:27.785 "trtype": "RDMA", 00:08:27.785 "adrfam": "IPv4", 00:08:27.785 "traddr": "192.168.100.8", 00:08:27.785 "trsvcid": "4420", 00:08:27.785 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.785 }, 00:08:27.785 "ctrlr_data": { 00:08:27.785 "cntlid": 1, 00:08:27.785 "vendor_id": "0x8086", 00:08:27.785 "model_number": "SPDK bdev Controller", 00:08:27.785 "serial_number": "SPDK0", 00:08:27.786 "firmware_revision": "24.09.1", 00:08:27.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.786 "oacs": { 00:08:27.786 "security": 0, 00:08:27.786 "format": 0, 00:08:27.786 "firmware": 0, 00:08:27.786 "ns_manage": 0 00:08:27.786 }, 00:08:27.786 "multi_ctrlr": true, 
00:08:27.786 "ana_reporting": false 00:08:27.786 }, 00:08:27.786 "vs": { 00:08:27.786 "nvme_version": "1.3" 00:08:27.786 }, 00:08:27.786 "ns_data": { 00:08:27.786 "id": 1, 00:08:27.786 "can_share": true 00:08:27.786 } 00:08:27.786 } 00:08:27.786 ], 00:08:27.786 "mp_policy": "active_passive" 00:08:27.786 } 00:08:27.786 } 00:08:27.786 ] 00:08:27.786 21:38:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2896038 00:08:27.786 21:38:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.786 21:38:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.045 Running I/O for 10 seconds... 00:08:28.981 Latency(us) 00:08:28.981 [2024-11-29T20:38:01.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.981 Nvme0n1 : 1.00 34240.00 133.75 0.00 0.00 0.00 0.00 0.00 00:08:28.981 [2024-11-29T20:38:01.229Z] =================================================================================================================== 00:08:28.981 [2024-11-29T20:38:01.229Z] Total : 34240.00 133.75 0.00 0.00 0.00 0.00 0.00 00:08:28.981 00:08:29.917 21:38:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:29.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.917 Nvme0n1 : 2.00 34898.00 136.32 0.00 0.00 0.00 0.00 0.00 00:08:29.917 [2024-11-29T20:38:02.166Z] =================================================================================================================== 00:08:29.918 [2024-11-29T20:38:02.166Z] Total : 34898.00 136.32 0.00 0.00 0.00 0.00 0.00 00:08:29.918 00:08:30.177 true 00:08:30.177 21:38:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:30.177 21:38:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:30.177 21:38:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:30.177 21:38:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:30.177 21:38:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2896038 00:08:31.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.116 Nvme0n1 : 3.00 35114.33 137.17 0.00 0.00 0.00 0.00 0.00 00:08:31.116 [2024-11-29T20:38:03.364Z] =================================================================================================================== 00:08:31.116 [2024-11-29T20:38:03.364Z] Total : 35114.33 137.17 0.00 0.00 0.00 0.00 0.00 00:08:31.116 00:08:32.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.111 Nvme0n1 : 4.00 35255.25 137.72 0.00 0.00 0.00 0.00 0.00 00:08:32.111 [2024-11-29T20:38:04.359Z] 
=================================================================================================================== 00:08:32.111 [2024-11-29T20:38:04.359Z] Total : 35255.25 137.72 0.00 0.00 0.00 0.00 0.00 00:08:32.111 00:08:33.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.045 Nvme0n1 : 5.00 35374.20 138.18 0.00 0.00 0.00 0.00 0.00 00:08:33.045 [2024-11-29T20:38:05.293Z] =================================================================================================================== 00:08:33.045 [2024-11-29T20:38:05.293Z] Total : 35374.20 138.18 0.00 0.00 0.00 0.00 0.00 00:08:33.045 00:08:33.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.980 Nvme0n1 : 6.00 35451.83 138.48 0.00 0.00 0.00 0.00 0.00 00:08:33.980 [2024-11-29T20:38:06.228Z] =================================================================================================================== 00:08:33.980 [2024-11-29T20:38:06.228Z] Total : 35451.83 138.48 0.00 0.00 0.00 0.00 0.00 00:08:33.980 00:08:34.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.918 Nvme0n1 : 7.00 35525.43 138.77 0.00 0.00 0.00 0.00 0.00 00:08:34.918 [2024-11-29T20:38:07.166Z] =================================================================================================================== 00:08:34.918 [2024-11-29T20:38:07.166Z] Total : 35525.43 138.77 0.00 0.00 0.00 0.00 0.00 00:08:34.918 00:08:36.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.294 Nvme0n1 : 8.00 35575.62 138.97 0.00 0.00 0.00 0.00 0.00 00:08:36.294 [2024-11-29T20:38:08.542Z] =================================================================================================================== 00:08:36.294 [2024-11-29T20:38:08.542Z] Total : 35575.62 138.97 0.00 0.00 0.00 0.00 0.00 00:08:36.294 00:08:37.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.231 Nvme0n1 : 9.00 35606.11 139.09 0.00 0.00 0.00 0.00 0.00 00:08:37.231 [2024-11-29T20:38:09.479Z] =================================================================================================================== 00:08:37.231 [2024-11-29T20:38:09.479Z] Total : 35606.11 139.09 0.00 0.00 0.00 0.00 0.00 00:08:37.231 00:08:38.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.167 Nvme0n1 : 10.00 35625.00 139.16 0.00 0.00 0.00 0.00 0.00 00:08:38.167 [2024-11-29T20:38:10.415Z] =================================================================================================================== 00:08:38.167 [2024-11-29T20:38:10.415Z] Total : 35625.00 139.16 0.00 0.00 0.00 0.00 0.00 00:08:38.167 00:08:38.167 00:08:38.167 Latency(us) 00:08:38.167 [2024-11-29T20:38:10.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.167 Nvme0n1 : 10.00 35624.91 139.16 0.00 0.00 3590.23 2660.76 15414.07 00:08:38.167 [2024-11-29T20:38:10.415Z] =================================================================================================================== 00:08:38.167 [2024-11-29T20:38:10.415Z] Total : 35624.91 139.16 0.00 0.00 3590.23 2660.76 15414.07 00:08:38.167 { 00:08:38.167 "results": [ 00:08:38.167 { 00:08:38.167 "job": "Nvme0n1", 00:08:38.167 "core_mask": "0x2", 00:08:38.167 "workload": "randwrite", 00:08:38.167 "status": "finished", 00:08:38.167 "queue_depth": 128, 00:08:38.167 "io_size": 4096, 
00:08:38.167 "runtime": 10.003057, 00:08:38.167 "iops": 35624.9094651765, 00:08:38.167 "mibps": 139.1598025983457, 00:08:38.167 "io_failed": 0, 00:08:38.167 "io_timeout": 0, 00:08:38.167 "avg_latency_us": 3590.2294813193475, 00:08:38.167 "min_latency_us": 2660.7616, 00:08:38.167 "max_latency_us": 15414.0672 00:08:38.167 } 00:08:38.167 ], 00:08:38.167 "core_count": 1 00:08:38.167 } 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2895790 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2895790 ']' 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2895790 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2895790 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2895790' 00:08:38.167 killing process with pid 2895790 00:08:38.167 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2895790 00:08:38.168 Received shutdown signal, test time was about 10.000000 seconds 00:08:38.168 00:08:38.168 Latency(us) 00:08:38.168 [2024-11-29T20:38:10.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.168 [2024-11-29T20:38:10.416Z] =================================================================================================================== 00:08:38.168 [2024-11-29T20:38:10.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:38.168 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2895790 00:08:38.426 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:38.426 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.684 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:38.684 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.944 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:38.944 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:38.944 21:38:10 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2892713 00:08:38.944 21:38:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2892713 00:08:38.944 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2892713 Killed "${NVMF_APP[@]}" "$@" 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=2897917 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 2897917 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2897917 ']' 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.944 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.944 [2024-11-29 21:38:11.088784] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:38.944 [2024-11-29 21:38:11.088839] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.944 [2024-11-29 21:38:11.158953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.203 [2024-11-29 21:38:11.197211] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.203 [2024-11-29 21:38:11.197247] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.203 [2024-11-29 21:38:11.197256] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.203 [2024-11-29 21:38:11.197264] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
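This is where the dirty variant earns its name: the previous target was killed with SIGKILL (the shell's "Killed" job notice above), so the lvstore metadata was never closed cleanly. After nvmfappstart brings up a fresh nvmf_tgt (-m 0x1, pid 2897917 here), the recovery that the next trace lines show amounts to, roughly:

    # Re-create the AIO bdev on the untouched backing file; examining it finds the
    # dirty blobstore and replays its metadata ("Performing recovery on blobstore",
    # "Recover: blob 0x0"/"0x1" in the log), re-registering lvs and lvol.
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine

    # Poll until the recovered lvol bdev is registered again (2 s timeout).
    scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000

The free/total cluster checks that follow (61 free, 99 total) confirm that both the grown size and the allocations survived the unclean shutdown.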
00:08:39.203 [2024-11-29 21:38:11.197271] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.203 [2024-11-29 21:38:11.197291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.203 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.203 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:39.203 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:39.203 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.203 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.203 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.203 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.462 [2024-11-29 21:38:11.493638] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:39.462 [2024-11-29 21:38:11.493774] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:39.462 [2024-11-29 21:38:11.493801] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 01ebbc5b-d288-470e-b472-77420ac9e5ce 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=01ebbc5b-d288-470e-b472-77420ac9e5ce 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.462 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01ebbc5b-d288-470e-b472-77420ac9e5ce -t 2000 00:08:39.721 [ 00:08:39.721 { 00:08:39.721 "name": "01ebbc5b-d288-470e-b472-77420ac9e5ce", 00:08:39.721 "aliases": [ 00:08:39.721 "lvs/lvol" 00:08:39.721 ], 00:08:39.721 "product_name": "Logical Volume", 00:08:39.721 "block_size": 4096, 00:08:39.721 "num_blocks": 38912, 00:08:39.721 "uuid": "01ebbc5b-d288-470e-b472-77420ac9e5ce", 00:08:39.721 "assigned_rate_limits": { 00:08:39.721 "rw_ios_per_sec": 0, 00:08:39.721 "rw_mbytes_per_sec": 0, 
00:08:39.721 "r_mbytes_per_sec": 0, 00:08:39.721 "w_mbytes_per_sec": 0 00:08:39.721 }, 00:08:39.721 "claimed": false, 00:08:39.721 "zoned": false, 00:08:39.721 "supported_io_types": { 00:08:39.721 "read": true, 00:08:39.721 "write": true, 00:08:39.721 "unmap": true, 00:08:39.721 "flush": false, 00:08:39.721 "reset": true, 00:08:39.721 "nvme_admin": false, 00:08:39.721 "nvme_io": false, 00:08:39.721 "nvme_io_md": false, 00:08:39.721 "write_zeroes": true, 00:08:39.721 "zcopy": false, 00:08:39.721 "get_zone_info": false, 00:08:39.721 "zone_management": false, 00:08:39.721 "zone_append": false, 00:08:39.721 "compare": false, 00:08:39.721 "compare_and_write": false, 00:08:39.721 "abort": false, 00:08:39.721 "seek_hole": true, 00:08:39.721 "seek_data": true, 00:08:39.721 "copy": false, 00:08:39.721 "nvme_iov_md": false 00:08:39.721 }, 00:08:39.721 "driver_specific": { 00:08:39.721 "lvol": { 00:08:39.721 "lvol_store_uuid": "b99c66df-7e83-4785-ba56-7b0840657982", 00:08:39.721 "base_bdev": "aio_bdev", 00:08:39.721 "thin_provision": false, 00:08:39.721 "num_allocated_clusters": 38, 00:08:39.721 "snapshot": false, 00:08:39.721 "clone": false, 00:08:39.721 "esnap_clone": false 00:08:39.721 } 00:08:39.721 } 00:08:39.721 } 00:08:39.721 ] 00:08:39.721 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:39.721 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:39.721 21:38:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:39.980 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:39.980 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:39.980 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.239 [2024-11-29 21:38:12.426337] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:40.239 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:40.498 request: 00:08:40.498 { 00:08:40.498 "uuid": "b99c66df-7e83-4785-ba56-7b0840657982", 00:08:40.498 "method": "bdev_lvol_get_lvstores", 00:08:40.498 "req_id": 1 00:08:40.498 } 00:08:40.498 Got JSON-RPC error response 00:08:40.498 response: 00:08:40.498 { 00:08:40.498 "code": -19, 00:08:40.498 "message": "No such device" 00:08:40.498 } 00:08:40.498 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:40.498 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.498 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.498 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.498 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.757 aio_bdev 00:08:40.757 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 01ebbc5b-d288-470e-b472-77420ac9e5ce 00:08:40.757 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=01ebbc5b-d288-470e-b472-77420ac9e5ce 00:08:40.757 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.757 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:40.757 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.757 21:38:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.757 21:38:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.016 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 01ebbc5b-d288-470e-b472-77420ac9e5ce -t 2000 00:08:41.016 [ 00:08:41.016 { 00:08:41.016 "name": "01ebbc5b-d288-470e-b472-77420ac9e5ce", 00:08:41.016 "aliases": [ 00:08:41.016 "lvs/lvol" 00:08:41.016 ], 00:08:41.016 "product_name": "Logical Volume", 00:08:41.016 "block_size": 4096, 00:08:41.016 "num_blocks": 38912, 00:08:41.016 "uuid": "01ebbc5b-d288-470e-b472-77420ac9e5ce", 00:08:41.016 "assigned_rate_limits": { 00:08:41.016 "rw_ios_per_sec": 0, 00:08:41.016 "rw_mbytes_per_sec": 0, 00:08:41.016 "r_mbytes_per_sec": 0, 00:08:41.016 "w_mbytes_per_sec": 0 00:08:41.016 }, 00:08:41.016 "claimed": false, 00:08:41.017 "zoned": false, 00:08:41.017 "supported_io_types": { 00:08:41.017 "read": true, 00:08:41.017 "write": true, 00:08:41.017 "unmap": true, 00:08:41.017 "flush": false, 00:08:41.017 "reset": true, 00:08:41.017 "nvme_admin": false, 00:08:41.017 "nvme_io": false, 00:08:41.017 "nvme_io_md": false, 00:08:41.017 "write_zeroes": true, 00:08:41.017 "zcopy": false, 00:08:41.017 "get_zone_info": false, 00:08:41.017 "zone_management": false, 00:08:41.017 "zone_append": false, 00:08:41.017 "compare": false, 00:08:41.017 "compare_and_write": false, 00:08:41.017 "abort": false, 00:08:41.017 "seek_hole": true, 00:08:41.017 "seek_data": true, 00:08:41.017 "copy": false, 00:08:41.017 "nvme_iov_md": false 00:08:41.017 }, 00:08:41.017 "driver_specific": { 00:08:41.017 "lvol": { 00:08:41.017 "lvol_store_uuid": "b99c66df-7e83-4785-ba56-7b0840657982", 00:08:41.017 "base_bdev": "aio_bdev", 00:08:41.017 "thin_provision": false, 00:08:41.017 "num_allocated_clusters": 38, 00:08:41.017 "snapshot": false, 00:08:41.017 "clone": false, 00:08:41.017 "esnap_clone": false 00:08:41.017 } 00:08:41.017 } 00:08:41.017 } 00:08:41.017 ] 00:08:41.017 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:41.017 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:41.017 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.275 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.275 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:41.275 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:41.534 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:41.534 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 01ebbc5b-d288-470e-b472-77420ac9e5ce 00:08:41.534 21:38:13 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b99c66df-7e83-4785-ba56-7b0840657982 00:08:41.792 21:38:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.051 00:08:42.051 real 0m16.786s 00:08:42.051 user 0m43.901s 00:08:42.051 sys 0m3.307s 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.051 ************************************ 00:08:42.051 END TEST lvs_grow_dirty 00:08:42.051 ************************************ 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:42.051 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:42.051 nvmf_trace.0 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:42.310 rmmod nvme_rdma 00:08:42.310 rmmod nvme_fabrics 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:42.310 
21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 2897917 ']' 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 2897917 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2897917 ']' 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2897917 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2897917 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2897917' 00:08:42.310 killing process with pid 2897917 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2897917 00:08:42.310 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2897917 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:08:42.569 00:08:42.569 real 0m39.842s 00:08:42.569 user 1m4.782s 00:08:42.569 sys 0m9.760s 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.569 ************************************ 00:08:42.569 END TEST nvmf_lvs_grow 00:08:42.569 ************************************ 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.569 ************************************ 00:08:42.569 START TEST nvmf_bdev_io_wait 00:08:42.569 ************************************ 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:42.569 * Looking for test storage... 
00:08:42.569 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:42.569 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.829 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:42.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.830 --rc genhtml_branch_coverage=1 00:08:42.830 --rc genhtml_function_coverage=1 00:08:42.830 --rc genhtml_legend=1 00:08:42.830 --rc geninfo_all_blocks=1 00:08:42.830 --rc geninfo_unexecuted_blocks=1 00:08:42.830 00:08:42.830 ' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:42.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.830 --rc genhtml_branch_coverage=1 00:08:42.830 --rc genhtml_function_coverage=1 00:08:42.830 --rc genhtml_legend=1 00:08:42.830 --rc geninfo_all_blocks=1 00:08:42.830 --rc geninfo_unexecuted_blocks=1 00:08:42.830 00:08:42.830 ' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:42.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.830 --rc genhtml_branch_coverage=1 00:08:42.830 --rc genhtml_function_coverage=1 00:08:42.830 --rc genhtml_legend=1 00:08:42.830 --rc geninfo_all_blocks=1 00:08:42.830 --rc geninfo_unexecuted_blocks=1 00:08:42.830 00:08:42.830 ' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:42.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.830 --rc genhtml_branch_coverage=1 00:08:42.830 --rc genhtml_function_coverage=1 00:08:42.830 --rc genhtml_legend=1 00:08:42.830 --rc geninfo_all_blocks=1 00:08:42.830 --rc geninfo_unexecuted_blocks=1 00:08:42.830 00:08:42.830 ' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.830 21:38:14 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.830 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:42.830 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.831 21:38:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.395 21:38:21 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:49.395 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:49.395 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:08:49.395 21:38:21 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:49.395 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:49.395 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # rdma_device_init 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@62 -- # uname 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@526 -- # allocate_nic_ips 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.395 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:49.396 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.396 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:49.396 altname enp217s0f0np0 00:08:49.396 altname ens818f0np0 00:08:49.396 inet 192.168.100.8/24 scope global mlx_0_0 00:08:49.396 valid_lft forever preferred_lft forever 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:49.396 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.396 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:49.396 altname enp217s0f1np1 00:08:49.396 altname ens818f1np1 00:08:49.396 inet 192.168.100.9/24 scope global mlx_0_1 00:08:49.396 valid_lft forever preferred_lft forever 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
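For reference, the get_ip_address helper xtraced above reduces to three standard tools; a minimal stand-alone sketch, assuming the mlx_0_0/mlx_0_1 interface names and the 192.168.100.0/24 addressing seen in this run:

# Sketch of the per-interface IPv4 lookup traced above; the interface names
# and addresses are specific to this testbed, not universal.
for interface in mlx_0_0 mlx_0_1; do
    ip_addr=$(ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1)
    echo "$interface -> $ip_addr"   # 192.168.100.8 and 192.168.100.9 here
done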
00:08:49.396 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:08:49.656 192.168.100.9' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:08:49.656 192.168.100.9' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # head -n 1 00:08:49.656 21:38:21 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:08:49.656 192.168.100.9' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # head -n 1 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # tail -n +2 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=2901950 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 2901950 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2901950 ']' 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.656 21:38:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.656 [2024-11-29 21:38:21.797613] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
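The launch-and-wait pattern logged here (nvmfappstart followed by waitforlisten against /var/tmp/spdk.sock) can be approximated stand-alone; a hedged sketch, where SPDK_ROOT is shorthand for the workspace tree shown in the command lines above, and the readiness poll is an approximation of what waitforlisten does:

# Sketch only: start nvmf_tgt with the flags logged above, then poll the
# default RPC socket until the app answers.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# --wait-for-rpc holds the app in its pre-init state, but the RPC socket is
# already up, so an always-available method like rpc_get_methods answers.
until "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) listening on /var/tmp/spdk.sock"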
00:08:49.656 [2024-11-29 21:38:21.797661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.656 [2024-11-29 21:38:21.868381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.916 [2024-11-29 21:38:21.909884] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.916 [2024-11-29 21:38:21.909924] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.916 [2024-11-29 21:38:21.909933] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.916 [2024-11-29 21:38:21.909942] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.916 [2024-11-29 21:38:21.909949] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.916 [2024-11-29 21:38:21.909995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.916 [2024-11-29 21:38:21.910019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.916 [2024-11-29 21:38:21.910051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.916 [2024-11-29 21:38:21.910053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.483 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.743 21:38:22 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.743 [2024-11-29 21:38:22.779131] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11f7e90/0x11fc340) succeed. 00:08:50.743 [2024-11-29 21:38:22.789402] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11f9480/0x123d9e0) succeed. 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.743 Malloc0 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.743 [2024-11-29 21:38:22.968985] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2902136 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2902139 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 
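At this point the target side is fully provisioned; stripped of the rpc_cmd wrapper, the sequence xtraced above amounts to the following (rpc.py standing for scripts/rpc.py against the default socket), after which the harness assembles the bdevperf JSON that continues below:

# Bare form of the target-side RPCs issued above, in order.
rpc.py bdev_set_options -p 5 -c 1                    # bdev IO pool/cache sizing
rpc.py framework_start_init                          # leave --wait-for-rpc pre-init state
rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420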
00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:50.743 { 00:08:50.743 "params": { 00:08:50.743 "name": "Nvme$subsystem", 00:08:50.743 "trtype": "$TEST_TRANSPORT", 00:08:50.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.743 "adrfam": "ipv4", 00:08:50.743 "trsvcid": "$NVMF_PORT", 00:08:50.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.743 "hdgst": ${hdgst:-false}, 00:08:50.743 "ddgst": ${ddgst:-false} 00:08:50.743 }, 00:08:50.743 "method": "bdev_nvme_attach_controller" 00:08:50.743 } 00:08:50.743 EOF 00:08:50.743 )") 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2902141 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:50.743 { 00:08:50.743 "params": { 00:08:50.743 "name": "Nvme$subsystem", 00:08:50.743 "trtype": "$TEST_TRANSPORT", 00:08:50.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.743 "adrfam": "ipv4", 00:08:50.743 "trsvcid": "$NVMF_PORT", 00:08:50.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.743 "hdgst": ${hdgst:-false}, 00:08:50.743 "ddgst": ${ddgst:-false} 00:08:50.743 }, 00:08:50.743 "method": "bdev_nvme_attach_controller" 00:08:50.743 } 00:08:50.743 EOF 00:08:50.743 )") 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2902145 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:50.743 { 00:08:50.743 "params": { 00:08:50.743 "name": "Nvme$subsystem", 00:08:50.743 "trtype": "$TEST_TRANSPORT", 
00:08:50.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.743 "adrfam": "ipv4", 00:08:50.743 "trsvcid": "$NVMF_PORT", 00:08:50.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.743 "hdgst": ${hdgst:-false}, 00:08:50.743 "ddgst": ${ddgst:-false} 00:08:50.743 }, 00:08:50.743 "method": "bdev_nvme_attach_controller" 00:08:50.743 } 00:08:50.743 EOF 00:08:50.743 )") 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:50.743 { 00:08:50.743 "params": { 00:08:50.743 "name": "Nvme$subsystem", 00:08:50.743 "trtype": "$TEST_TRANSPORT", 00:08:50.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.743 "adrfam": "ipv4", 00:08:50.743 "trsvcid": "$NVMF_PORT", 00:08:50.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.743 "hdgst": ${hdgst:-false}, 00:08:50.743 "ddgst": ${ddgst:-false} 00:08:50.743 }, 00:08:50.743 "method": "bdev_nvme_attach_controller" 00:08:50.743 } 00:08:50.743 EOF 00:08:50.743 )") 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2902136 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:50.743 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:51.003 "params": { 00:08:51.003 "name": "Nvme1", 00:08:51.003 "trtype": "rdma", 00:08:51.003 "traddr": "192.168.100.8", 00:08:51.003 "adrfam": "ipv4", 00:08:51.003 "trsvcid": "4420", 00:08:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.003 "hdgst": false, 00:08:51.003 "ddgst": false 00:08:51.003 }, 00:08:51.003 "method": "bdev_nvme_attach_controller" 00:08:51.003 }' 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:51.003 "params": { 00:08:51.003 "name": "Nvme1", 00:08:51.003 "trtype": "rdma", 00:08:51.003 "traddr": "192.168.100.8", 00:08:51.003 "adrfam": "ipv4", 00:08:51.003 "trsvcid": "4420", 00:08:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.003 "hdgst": false, 00:08:51.003 "ddgst": false 00:08:51.003 }, 00:08:51.003 "method": "bdev_nvme_attach_controller" 00:08:51.003 }' 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:51.003 "params": { 00:08:51.003 "name": "Nvme1", 00:08:51.003 "trtype": "rdma", 00:08:51.003 "traddr": "192.168.100.8", 00:08:51.003 "adrfam": "ipv4", 00:08:51.003 "trsvcid": "4420", 00:08:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.003 "hdgst": false, 00:08:51.003 "ddgst": false 00:08:51.003 }, 00:08:51.003 "method": "bdev_nvme_attach_controller" 00:08:51.003 }' 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:51.003 21:38:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:51.003 "params": { 00:08:51.003 "name": "Nvme1", 00:08:51.003 "trtype": "rdma", 00:08:51.003 "traddr": "192.168.100.8", 00:08:51.003 "adrfam": "ipv4", 00:08:51.003 "trsvcid": "4420", 00:08:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.003 "hdgst": false, 00:08:51.003 "ddgst": false 00:08:51.003 }, 00:08:51.003 "method": "bdev_nvme_attach_controller" 00:08:51.003 }' 00:08:51.004 [2024-11-29 21:38:23.020729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:51.004 [2024-11-29 21:38:23.020782] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:51.004 [2024-11-29 21:38:23.021729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:51.004 [2024-11-29 21:38:23.021777] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:51.004 [2024-11-29 21:38:23.021845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:51.004 [2024-11-29 21:38:23.021901] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:51.004 [2024-11-29 21:38:23.024379] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:51.004 [2024-11-29 21:38:23.024430] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:51.004 [2024-11-29 21:38:23.185789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.004 [2024-11-29 21:38:23.210348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:08:51.004 [2024-11-29 21:38:23.237731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.263 [2024-11-29 21:38:23.261909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:08:51.263 [2024-11-29 21:38:23.333992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.263 [2024-11-29 21:38:23.359298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:51.263 [2024-11-29 21:38:23.428748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.263 [2024-11-29 21:38:23.461421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:08:51.523 Running I/O for 1 seconds... 00:08:51.523 Running I/O for 1 seconds... 00:08:51.782 Running I/O for 1 seconds... 00:08:51.782 Running I/O for 1 seconds... 00:08:52.719 19108.00 IOPS, 74.64 MiB/s 00:08:52.719 Latency(us) 00:08:52.719 [2024-11-29T20:38:24.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.719 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:52.719 Nvme1n1 : 1.01 19135.61 74.75 0.00 0.00 6668.58 4299.16 13946.06 00:08:52.719 [2024-11-29T20:38:24.967Z] =================================================================================================================== 00:08:52.719 [2024-11-29T20:38:24.967Z] Total : 19135.61 74.75 0.00 0.00 6668.58 4299.16 13946.06 00:08:52.719 263976.00 IOPS, 1031.16 MiB/s 00:08:52.720 Latency(us) 00:08:52.720 [2024-11-29T20:38:24.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.720 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:52.720 Nvme1n1 : 1.00 263585.99 1029.63 0.00 0.00 483.23 224.46 2319.97 00:08:52.720 [2024-11-29T20:38:24.968Z] =================================================================================================================== 00:08:52.720 [2024-11-29T20:38:24.968Z] Total : 263585.99 1029.63 0.00 0.00 483.23 224.46 2319.97 00:08:52.720 15400.00 IOPS, 60.16 MiB/s 00:08:52.720 Latency(us) 00:08:52.720 [2024-11-29T20:38:24.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.720 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:52.720 Nvme1n1 : 1.01 15448.65 60.35 0.00 0.00 8258.16 4692.38 15938.36 00:08:52.720 [2024-11-29T20:38:24.968Z] =================================================================================================================== 00:08:52.720 [2024-11-29T20:38:24.968Z] Total : 15448.65 60.35 0.00 0.00 8258.16 4692.38 15938.36 00:08:52.720 18612.00 IOPS, 72.70 MiB/s 00:08:52.720 Latency(us) 00:08:52.720 [2024-11-29T20:38:24.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.720 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:52.720 Nvme1n1 : 1.01 18701.10 73.05 0.00 0.00 6829.40 2660.76 17616.08 00:08:52.720 [2024-11-29T20:38:24.968Z] 
=================================================================================================================== 00:08:52.720 [2024-11-29T20:38:24.968Z] Total : 18701.10 73.05 0.00 0.00 6829.40 2660.76 17616.08 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2902139 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2902141 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2902145 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:52.979 rmmod nvme_rdma 00:08:52.979 rmmod nvme_fabrics 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 2901950 ']' 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 2901950 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2901950 ']' 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2901950 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.979 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2901950 00:08:53.238 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.238 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:08:53.238 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2901950' 00:08:53.238 killing process with pid 2901950 00:08:53.238 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2901950 00:08:53.238 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2901950 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:08:53.497 00:08:53.497 real 0m10.825s 00:08:53.497 user 0m21.713s 00:08:53.497 sys 0m6.827s 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 ************************************ 00:08:53.497 END TEST nvmf_bdev_io_wait 00:08:53.497 ************************************ 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.497 ************************************ 00:08:53.497 START TEST nvmf_queue_depth 00:08:53.497 ************************************ 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:53.497 * Looking for test storage... 
00:08:53.497 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.497 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.758 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.759 --rc genhtml_branch_coverage=1 00:08:53.759 --rc genhtml_function_coverage=1 00:08:53.759 --rc genhtml_legend=1 00:08:53.759 --rc geninfo_all_blocks=1 00:08:53.759 --rc geninfo_unexecuted_blocks=1 00:08:53.759 00:08:53.759 ' 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.759 --rc genhtml_branch_coverage=1 00:08:53.759 --rc genhtml_function_coverage=1 00:08:53.759 --rc genhtml_legend=1 00:08:53.759 --rc geninfo_all_blocks=1 00:08:53.759 --rc geninfo_unexecuted_blocks=1 00:08:53.759 00:08:53.759 ' 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.759 --rc genhtml_branch_coverage=1 00:08:53.759 --rc genhtml_function_coverage=1 00:08:53.759 --rc genhtml_legend=1 00:08:53.759 --rc geninfo_all_blocks=1 00:08:53.759 --rc geninfo_unexecuted_blocks=1 00:08:53.759 00:08:53.759 ' 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.759 --rc genhtml_branch_coverage=1 00:08:53.759 --rc genhtml_function_coverage=1 00:08:53.759 --rc genhtml_legend=1 00:08:53.759 --rc geninfo_all_blocks=1 00:08:53.759 --rc geninfo_unexecuted_blocks=1 00:08:53.759 00:08:53.759 ' 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.759 21:38:25 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.759 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.760 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.760 21:38:25 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:00.329 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:00.329 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:00.330 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:00.330 
21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:00.330 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:00.330 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # rdma_device_init 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # 
uname 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 
00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:00.330 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:00.330 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:00.330 altname enp217s0f0np0 00:09:00.330 altname ens818f0np0 00:09:00.330 inet 192.168.100.8/24 scope global mlx_0_0 00:09:00.330 valid_lft forever preferred_lft forever 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:00.330 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:00.330 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:00.330 altname enp217s0f1np1 00:09:00.330 altname ens818f1np1 00:09:00.330 inet 192.168.100.9/24 scope global mlx_0_1 00:09:00.330 valid_lft forever preferred_lft forever 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:00.330 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:00.331 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:00.590 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:00.590 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:00.590 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:00.590 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:00.590 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:00.591 192.168.100.9' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:00.591 192.168.100.9' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # head -n 1 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 
00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:00.591 192.168.100.9' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # tail -n +2 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # head -n 1 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=2905973 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 2905973 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2905973 ']' 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.591 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.591 [2024-11-29 21:38:32.692604] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:00.591 [2024-11-29 21:38:32.692658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.591 [2024-11-29 21:38:32.766061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.591 [2024-11-29 21:38:32.804306] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:00.591 [2024-11-29 21:38:32.804347] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.591 [2024-11-29 21:38:32.804356] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.591 [2024-11-29 21:38:32.804365] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.591 [2024-11-29 21:38:32.804372] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.591 [2024-11-29 21:38:32.804393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.850 21:38:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.850 [2024-11-29 21:38:32.967005] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b3b290/0x1b3f740) succeed. 00:09:00.850 [2024-11-29 21:38:32.975735] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b3c740/0x1b80de0) succeed. 
00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.850 Malloc0 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.850 [2024-11-29 21:38:33.062958] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:00.850 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2905998 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2905998 /var/tmp/bdevperf.sock 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2905998 ']' 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.851 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.110 [2024-11-29 21:38:33.111180] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:01.110 [2024-11-29 21:38:33.111228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905998 ] 00:09:01.110 [2024-11-29 21:38:33.180606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.110 [2024-11-29 21:38:33.218871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.110 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.110 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:01.110 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:01.110 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.110 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.369 NVMe0n1 00:09:01.369 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.369 21:38:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:01.369 Running I/O for 10 seconds... 
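Steps 29 through 35 of queue_depth.sh drive the I/O from a second SPDK process: bdevperf is launched with -z so it starts idle, the remote namespace is attached over NVMe/RDMA, and perform_tests starts the 10-second verify run at queue depth 1024. A condensed sketch of those three steps, using the binaries and flags exactly as they appear in the log:

# Start bdevperf idle (-z) on its own RPC socket; it will not issue I/O
# until perform_tests arrives over that socket.
BP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
SOCK=/var/tmp/bdevperf.sock
$BP -z -r $SOCK -q 1024 -o 4096 -w verify -t 10 &

# The script waits for the socket first (waitforlisten above), then
# attaches the remote controller as bdev NVMe0n1 over RDMA.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s $SOCK \
    bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the configured workload and block until the JSON result returns.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s $SOCK perform_tests

Launching with -z and driving the run over RPC is what lets the script interleave its own setup (controller attach) between process start and I/O start.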
00:09:03.682 17408.00 IOPS, 68.00 MiB/s [2024-11-29T20:38:36.866Z] 17920.00 IOPS, 70.00 MiB/s [2024-11-29T20:38:37.802Z] 17990.00 IOPS, 70.27 MiB/s [2024-11-29T20:38:38.738Z] 18001.25 IOPS, 70.32 MiB/s [2024-11-29T20:38:39.675Z] 18022.40 IOPS, 70.40 MiB/s [2024-11-29T20:38:40.610Z] 18090.67 IOPS, 70.67 MiB/s [2024-11-29T20:38:41.543Z] 18018.00 IOPS, 70.38 MiB/s [2024-11-29T20:38:42.540Z] 18048.00 IOPS, 70.50 MiB/s [2024-11-29T20:38:43.917Z] 18056.89 IOPS, 70.53 MiB/s [2024-11-29T20:38:43.917Z] 18051.60 IOPS, 70.51 MiB/s 00:09:11.669 Latency(us) 00:09:11.669 [2024-11-29T20:38:43.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.669 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:11.669 Verification LBA range: start 0x0 length 0x4000 00:09:11.669 NVMe0n1 : 10.03 18083.09 70.64 0.00 0.00 56477.04 3001.55 36490.44 00:09:11.669 [2024-11-29T20:38:43.917Z] =================================================================================================================== 00:09:11.669 [2024-11-29T20:38:43.917Z] Total : 18083.09 70.64 0.00 0.00 56477.04 3001.55 36490.44 00:09:11.669 { 00:09:11.669 "results": [ 00:09:11.669 { 00:09:11.669 "job": "NVMe0n1", 00:09:11.669 "core_mask": "0x1", 00:09:11.669 "workload": "verify", 00:09:11.669 "status": "finished", 00:09:11.669 "verify_range": { 00:09:11.669 "start": 0, 00:09:11.669 "length": 16384 00:09:11.669 }, 00:09:11.669 "queue_depth": 1024, 00:09:11.669 "io_size": 4096, 00:09:11.669 "runtime": 10.03075, 00:09:11.669 "iops": 18083.094484460285, 00:09:11.669 "mibps": 70.63708782992299, 00:09:11.669 "io_failed": 0, 00:09:11.669 "io_timeout": 0, 00:09:11.669 "avg_latency_us": 56477.037415805986, 00:09:11.669 "min_latency_us": 3001.5488, 00:09:11.669 "max_latency_us": 36490.4448 00:09:11.669 } 00:09:11.669 ], 00:09:11.669 "core_count": 1 00:09:11.669 } 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2905998 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2905998 ']' 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2905998 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2905998 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2905998' 00:09:11.669 killing process with pid 2905998 00:09:11.669 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2905998 00:09:11.669 Received shutdown signal, test time was about 10.000000 seconds 00:09:11.669 00:09:11.669 Latency(us) 00:09:11.669 [2024-11-29T20:38:43.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.669 [2024-11-29T20:38:43.918Z] 
=================================================================================================================== 00:09:11.670 [2024-11-29T20:38:43.918Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2905998 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:11.670 rmmod nvme_rdma 00:09:11.670 rmmod nvme_fabrics 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 2905973 ']' 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 2905973 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2905973 ']' 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2905973 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.670 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2905973 00:09:11.929 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:11.929 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:11.929 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2905973' 00:09:11.929 killing process with pid 2905973 00:09:11.929 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2905973 00:09:11.929 21:38:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2905973 00:09:12.188 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:12.188 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:12.188 00:09:12.188 real 0m18.601s 00:09:12.188 user 0m24.309s 00:09:12.188 sys 0m5.893s 00:09:12.188 
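bdevperf reports its summary twice: as the fixed-width latency table and as the JSON object shown above. A hypothetical post-processing step to pull the headline numbers back out (jq is not part of this harness; the field names follow the JSON printed in the log):

# Assumes the JSON block above was captured to bdevperf.json.
# Prints the job name, IOPS, throughput, and average latency in us.
jq '.results[0] | {job, iops, mibps, avg_latency_us}' bdevperf.json

The second, all-zero Latency table printed after "Received shutdown signal" is bdevperf's teardown summary, not a second measurement.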
21:38:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.188 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.188 ************************************ 00:09:12.188 END TEST nvmf_queue_depth 00:09:12.188 ************************************ 00:09:12.188 21:38:44 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:12.188 21:38:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:12.188 21:38:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.188 21:38:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.188 ************************************ 00:09:12.188 START TEST nvmf_target_multipath 00:09:12.188 ************************************ 00:09:12.189 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:12.189 * Looking for test storage... 00:09:12.189 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:12.189 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:12.189 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:12.189 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:12.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.449 --rc genhtml_branch_coverage=1 00:09:12.449 --rc genhtml_function_coverage=1 00:09:12.449 --rc genhtml_legend=1 00:09:12.449 --rc geninfo_all_blocks=1 00:09:12.449 --rc geninfo_unexecuted_blocks=1 00:09:12.449 00:09:12.449 ' 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:12.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.449 --rc genhtml_branch_coverage=1 00:09:12.449 --rc genhtml_function_coverage=1 00:09:12.449 --rc genhtml_legend=1 00:09:12.449 --rc geninfo_all_blocks=1 00:09:12.449 --rc geninfo_unexecuted_blocks=1 00:09:12.449 00:09:12.449 ' 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:12.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.449 --rc genhtml_branch_coverage=1 00:09:12.449 --rc genhtml_function_coverage=1 00:09:12.449 --rc genhtml_legend=1 00:09:12.449 --rc geninfo_all_blocks=1 00:09:12.449 --rc geninfo_unexecuted_blocks=1 00:09:12.449 00:09:12.449 ' 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:12.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.449 --rc genhtml_branch_coverage=1 00:09:12.449 --rc genhtml_function_coverage=1 00:09:12.449 --rc genhtml_legend=1 00:09:12.449 --rc geninfo_all_blocks=1 00:09:12.449 --rc geninfo_unexecuted_blocks=1 00:09:12.449 00:09:12.449 ' 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.449 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.450 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:12.450 21:38:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:19.025 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:19.025 21:38:51 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:19.025 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:19.025 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:19.026 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:19.026 21:38:51 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:19.026 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # rdma_device_init 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:19.026 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:19.027 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:19.027 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:19.027 altname enp217s0f0np0 00:09:19.027 altname ens818f0np0 00:09:19.027 inet 192.168.100.8/24 scope global mlx_0_0 00:09:19.027 valid_lft forever preferred_lft forever 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:19.027 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:19.027 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:19.027 altname enp217s0f1np1 00:09:19.027 altname ens818f1np1 00:09:19.027 inet 192.168.100.9/24 scope global mlx_0_1 00:09:19.027 valid_lft forever preferred_lft forever 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:19.027 192.168.100.9' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:19.027 192.168.100.9' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # head -n 1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:19.027 192.168.100.9' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # tail -n +2 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # head -n 1 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:09:19.027 run this test only with TCP transport for now 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@512 -- # nvmfcleanup 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.027 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:19.027 rmmod nvme_rdma 00:09:19.027 rmmod nvme_fabrics 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:19.287 00:09:19.287 real 0m7.030s 00:09:19.287 user 0m2.003s 00:09:19.287 sys 0m5.225s 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:19.287 ************************************ 
00:09:19.287 END TEST nvmf_target_multipath 00:09:19.287 ************************************ 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.287 ************************************ 00:09:19.287 START TEST nvmf_zcopy 00:09:19.287 ************************************ 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:19.287 * Looking for test storage... 00:09:19.287 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:19.287 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.548 --rc genhtml_branch_coverage=1 00:09:19.548 --rc genhtml_function_coverage=1 00:09:19.548 --rc genhtml_legend=1 00:09:19.548 --rc geninfo_all_blocks=1 00:09:19.548 --rc geninfo_unexecuted_blocks=1 00:09:19.548 00:09:19.548 ' 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.548 --rc genhtml_branch_coverage=1 00:09:19.548 --rc genhtml_function_coverage=1 00:09:19.548 --rc genhtml_legend=1 00:09:19.548 --rc geninfo_all_blocks=1 00:09:19.548 --rc geninfo_unexecuted_blocks=1 00:09:19.548 00:09:19.548 ' 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.548 --rc genhtml_branch_coverage=1 00:09:19.548 --rc genhtml_function_coverage=1 00:09:19.548 --rc genhtml_legend=1 00:09:19.548 --rc geninfo_all_blocks=1 00:09:19.548 --rc geninfo_unexecuted_blocks=1 00:09:19.548 00:09:19.548 ' 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:19.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.548 --rc genhtml_branch_coverage=1 00:09:19.548 --rc genhtml_function_coverage=1 00:09:19.548 --rc genhtml_legend=1 00:09:19.548 --rc geninfo_all_blocks=1 00:09:19.548 --rc geninfo_unexecuted_blocks=1 00:09:19.548 00:09:19.548 ' 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.548 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.549 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.549 21:38:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.124 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:26.125 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:26.125 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:26.125 21:38:58 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:26.125 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:26.125 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # rdma_device_init 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:26.125 21:38:58 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:26.125 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:26.386 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.386 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:26.386 altname enp217s0f0np0 00:09:26.386 altname ens818f0np0 00:09:26.386 inet 192.168.100.8/24 scope global mlx_0_0 00:09:26.386 valid_lft forever preferred_lft forever 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for 
nic_name in $(get_rdma_if_list) 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:26.386 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.386 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:26.386 altname enp217s0f1np1 00:09:26.386 altname ens818f1np1 00:09:26.386 inet 192.168.100.9/24 scope global mlx_0_1 00:09:26.386 valid_lft forever preferred_lft forever 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.386 21:38:58 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:26.386 192.168.100.9' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:26.386 192.168.100.9' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # head -n 1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:26.386 192.168.100.9' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # tail -n +2 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # head -n 1 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 
-- # set +x 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=2914493 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 2914493 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2914493 ']' 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.386 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.387 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.387 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.387 [2024-11-29 21:38:58.581637] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:26.387 [2024-11-29 21:38:58.581703] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.646 [2024-11-29 21:38:58.653650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.646 [2024-11-29 21:38:58.691287] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.646 [2024-11-29 21:38:58.691332] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.646 [2024-11-29 21:38:58.691342] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.646 [2024-11-29 21:38:58.691351] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.646 [2024-11-29 21:38:58.691358] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
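The startup sequence above is the nvmfappstart/waitforlisten pattern: launch nvmf_tgt in the background with the same -i/-e/-m flags, record its PID in nvmfpid, then poll the RPC socket until the target answers. A minimal sketch of that pattern, assuming SPDK's scripts/rpc.py is available; the retry count and sleep interval are illustrative assumptions, not the values autotest_common.sh actually uses:

  # Minimal start-and-wait sketch; flags and paths match the trace above.
  # NOTE: the retry count (100) and sleep (0.1s) are assumptions.
  nvmf_tgt_bin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
  rpc_sock=/var/tmp/spdk.sock

  "$nvmf_tgt_bin" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."

  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods succeeds once the target is listening on the socket
      if scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done
  kill -0 "$nvmfpid"   # fail if the target died during startup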
00:09:26.646 [2024-11-29 21:38:58.691382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.646 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.646 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:26.646 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:26.646 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.646 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.646 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.646 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:09:26.646 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:09:26.646 Unsupported transport: rdma 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:26.647 nvmf_trace.0 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.647 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:26.647 rmmod nvme_rdma 00:09:26.907 rmmod nvme_fabrics 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
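Before teardown, process_shm archived the trace file the target left in /dev/shm: it matched files named *.<shm id> and tarred each one into the output directory, which is the nvmf_trace.0_shm.tar.gz step visible above. Condensed from the commands in the trace (the output path is shortened here for the sketch):

  # Condensed process_shm: archive the /dev/shm/*.$id trace files.
  id=0
  output_dir=../output    # the trace uses spdk/../output
  shm_files=$(find /dev/shm -name "*.$id" -printf '%f\n')
  [ -z "$shm_files" ] && exit 1
  for n in $shm_files; do
      tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
  done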
00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 2914493 ']' 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 2914493 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2914493 ']' 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2914493 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2914493 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2914493' 00:09:26.907 killing process with pid 2914493 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2914493 00:09:26.907 21:38:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2914493 00:09:26.907 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:26.907 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:26.907 00:09:26.907 real 0m7.757s 00:09:26.907 user 0m2.827s 00:09:26.907 sys 0m5.562s 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.167 ************************************ 00:09:27.167 END TEST nvmf_zcopy 00:09:27.167 ************************************ 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.167 ************************************ 00:09:27.167 START TEST nvmf_nmic 00:09:27.167 ************************************ 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:27.167 * Looking for test storage... 
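Every suite in this log is bracketed the same way because run_test wraps it: an asterisk banner with START TEST, the timed test script (which produces the real/user/sys block seen above for nvmf_zcopy), then an END TEST banner. A reduced sketch of that wrapper; the argument checking and xtrace handling that autotest_common.sh performs are omitted here:

  # Reduced run_test sketch: banner, time the test, banner again.
  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # prints the real/user/sys block on return
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }

  # e.g. run_test nvmf_nmic test/nvmf/target/nmic.sh --transport=rdma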
00:09:27.167 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.167 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:27.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.427 --rc genhtml_branch_coverage=1 00:09:27.427 --rc genhtml_function_coverage=1 00:09:27.427 --rc genhtml_legend=1 00:09:27.427 --rc geninfo_all_blocks=1 00:09:27.427 --rc geninfo_unexecuted_blocks=1 00:09:27.427 00:09:27.427 ' 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:27.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.427 --rc genhtml_branch_coverage=1 00:09:27.427 --rc genhtml_function_coverage=1 00:09:27.427 --rc genhtml_legend=1 00:09:27.427 --rc geninfo_all_blocks=1 00:09:27.427 --rc geninfo_unexecuted_blocks=1 00:09:27.427 00:09:27.427 ' 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:27.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.427 --rc genhtml_branch_coverage=1 00:09:27.427 --rc genhtml_function_coverage=1 00:09:27.427 --rc genhtml_legend=1 00:09:27.427 --rc geninfo_all_blocks=1 00:09:27.427 --rc geninfo_unexecuted_blocks=1 00:09:27.427 00:09:27.427 ' 00:09:27.427 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:27.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.427 --rc genhtml_branch_coverage=1 00:09:27.427 --rc genhtml_function_coverage=1 00:09:27.427 --rc genhtml_legend=1 00:09:27.428 --rc geninfo_all_blocks=1 00:09:27.428 --rc geninfo_unexecuted_blocks=1 00:09:27.428 00:09:27.428 ' 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.428 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
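nvmftestinit, traced below, walks the Mellanox PCI devices, loads the IB/RDMA kernel modules, and reads each RDMA interface's address back with get_ip_address, exactly as in the nvmf_zcopy run above. That helper is just the ip/awk/cut pipeline visible verbatim in the trace; as a standalone sketch:

  # get_ip_address per the trace: take the IPv4 address column from
  # `ip -o -4 addr show` and strip the /prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # In this run: get_ip_address mlx_0_0 -> 192.168.100.8
  #              get_ip_address mlx_0_1 -> 192.168.100.9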
00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.428 21:38:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.002 21:39:06 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:34.002 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:34.002 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 
)) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:34.002 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:34.002 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # rdma_device_init 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:34.002 21:39:06 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:34.002 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.262 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:34.263 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.263 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:34.263 altname enp217s0f0np0 00:09:34.263 altname ens818f0np0 00:09:34.263 inet 192.168.100.8/24 scope global mlx_0_0 00:09:34.263 valid_lft forever preferred_lft forever 00:09:34.263 
21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:34.263 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.263 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:34.263 altname enp217s0f1np1 00:09:34.263 altname ens818f1np1 00:09:34.263 inet 192.168.100.9/24 scope global mlx_0_1 00:09:34.263 valid_lft forever preferred_lft forever 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:34.263 192.168.100.9' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:34.263 192.168.100.9' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # head -n 1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:34.263 192.168.100.9' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # tail -n +2 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # head -n 1 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 
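The address discovery traced above reduces to one ip(8) pipeline per RDMA interface, plus a head/tail split of the resulting list into the first and second target IPs. A sketch reconstructed from the xtrace (the helper name and the awk/cut/head/tail pipeline match nvmf/common.sh as traced; the surrounding loop is an approximation of the real get_available_rdma_ips):

    # Return the first IPv4 address on an interface, exactly as traced:
    # field 4 of `ip -o -4 addr show`, with the /prefix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for i in mlx_0_0 mlx_0_1; do get_ip_address "$i"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
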
00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=2918591 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 2918591 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2918591 ']' 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.263 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.263 [2024-11-29 21:39:06.500299] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:34.263 [2024-11-29 21:39:06.500362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.523 [2024-11-29 21:39:06.572589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.523 [2024-11-29 21:39:06.615048] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.523 [2024-11-29 21:39:06.615110] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.523 [2024-11-29 21:39:06.615120] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.523 [2024-11-29 21:39:06.615128] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.523 [2024-11-29 21:39:06.615135] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
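nvmfappstart, traced above, boils down to launching the target with tracing enabled and then waiting for its JSON-RPC socket. A hedged sketch of that sequence (binary path and flags are taken from the trace; the readiness loop is a simplification of the real waitforlisten helper, which polls the RPC endpoint rather than just the socket file):

    # Start the SPDK NVMe-oF target on 4 cores (-m 0xF) with all
    # tracepoint groups enabled (-e 0xFFFF), shared memory id 0 (-i 0).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the app is listening on the default RPC UNIX socket.
    rpc_sock=/var/tmp/spdk.sock
    until [ -S "$rpc_sock" ]; do sleep 0.1; done
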
00:09:34.523 [2024-11-29 21:39:06.615181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.523 [2024-11-29 21:39:06.615198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.523 [2024-11-29 21:39:06.615290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.523 [2024-11-29 21:39:06.615289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.523 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.523 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:34.523 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:34.523 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.523 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.523 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.523 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 [2024-11-29 21:39:06.800624] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d87f50/0x1d8c400) succeed. 00:09:34.784 [2024-11-29 21:39:06.811183] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d89540/0x1dcdaa0) succeed. 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 Malloc0 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.784 21:39:06 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 [2024-11-29 21:39:06.977132] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:34.784 test case1: single bdev can't be used in multiple subsystems 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 [2024-11-29 21:39:07.008935] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:34.784 [2024-11-29 21:39:07.008955] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:34.784 [2024-11-29 21:39:07.008965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.784 request: 00:09:34.784 { 00:09:34.784 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:34.784 "namespace": { 00:09:34.784 "bdev_name": "Malloc0", 00:09:34.784 "no_auto_visible": false 00:09:34.784 }, 00:09:34.784 "method": "nvmf_subsystem_add_ns", 00:09:34.784 "req_id": 1 00:09:34.784 } 00:09:34.784 Got JSON-RPC error response 00:09:34.784 response: 00:09:34.784 { 00:09:34.784 "code": -32602, 00:09:34.784 "message": "Invalid parameters" 00:09:34.784 } 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:34.784 Adding namespace failed - expected result. 
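Test case1 above exercises the bdev claiming rule: once Malloc0 is added as a namespace of cnode1, the NVMe-oF target claims it with an exclusive_write claim, so adding it to a second subsystem must fail with the Invalid parameters JSON-RPC error shown. The RPC sequence, reconstructed from the rpc_cmd calls in the trace (rpc.py path as used elsewhere in this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # Expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && exit 1
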
00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:34.784 test case2: host connect to nvmf target in multiple paths 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.784 [2024-11-29 21:39:07.025009] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.784 21:39:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:36.163 21:39:08 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:09:37.100 21:39:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.100 21:39:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:37.100 21:39:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.100 21:39:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:37.100 21:39:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:39.006 21:39:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:39.006 21:39:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:39.006 21:39:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.006 21:39:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:39.006 21:39:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.006 21:39:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:39.006 21:39:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:39.006 [global] 00:09:39.006 thread=1 00:09:39.006 invalidate=1 00:09:39.006 rw=write 00:09:39.006 time_based=1 00:09:39.006 runtime=1 00:09:39.006 ioengine=libaio 00:09:39.006 direct=1 00:09:39.006 bs=4096 00:09:39.006 iodepth=1 00:09:39.006 norandommap=0 00:09:39.006 numjobs=1 00:09:39.006 00:09:39.006 verify_dump=1 00:09:39.006 verify_backlog=512 00:09:39.006 verify_state_save=0 00:09:39.006 do_verify=1 00:09:39.006 verify=crc32c-intel 00:09:39.006 [job0] 00:09:39.006 filename=/dev/nvme0n1 00:09:39.006 Could not set queue depth (nvme0n1) 00:09:39.265 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.265 fio-3.35 00:09:39.265 Starting 1 thread 00:09:40.666 00:09:40.666 job0: (groupid=0, jobs=1): err= 0: pid=2919649: Fri Nov 29 21:39:12 2024 00:09:40.666 read: IOPS=7057, BW=27.6MiB/s (28.9MB/s)(27.6MiB/1001msec) 00:09:40.666 slat (nsec): min=8381, max=31690, avg=8952.26, stdev=817.94 00:09:40.666 clat (usec): min=40, max=102, avg=58.30, stdev= 3.50 00:09:40.666 lat (usec): min=58, max=111, avg=67.25, stdev= 3.56 00:09:40.666 clat percentiles (usec): 00:09:40.666 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:09:40.666 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:09:40.666 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 63], 95.00th=[ 65], 00:09:40.666 | 99.00th=[ 68], 99.50th=[ 69], 99.90th=[ 72], 99.95th=[ 79], 00:09:40.666 | 99.99th=[ 103] 00:09:40.666 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:09:40.666 slat (nsec): min=10701, max=43940, avg=11527.52, stdev=1167.18 00:09:40.666 clat (nsec): min=34785, max=90942, avg=56518.38, stdev=3568.39 00:09:40.666 lat (usec): min=59, max=133, avg=68.05, stdev= 3.78 00:09:40.666 clat percentiles (nsec): 00:09:40.667 | 1.00th=[49920], 5.00th=[51456], 10.00th=[52480], 20.00th=[53504], 00:09:40.667 | 30.00th=[54528], 40.00th=[55040], 50.00th=[56064], 60.00th=[57088], 00:09:40.667 | 70.00th=[58112], 80.00th=[59648], 90.00th=[61184], 95.00th=[62720], 00:09:40.667 | 99.00th=[66048], 99.50th=[67072], 99.90th=[71168], 99.95th=[80384], 00:09:40.667 | 99.99th=[90624] 00:09:40.667 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:09:40.667 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:09:40.667 lat (usec) : 50=0.59%, 100=99.40%, 250=0.01% 00:09:40.667 cpu : usr=12.10%, sys=18.10%, ctx=14234, majf=0, minf=1 00:09:40.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.667 issued rwts: total=7065,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.667 00:09:40.667 Run status group 0 (all jobs): 00:09:40.667 READ: bw=27.6MiB/s (28.9MB/s), 27.6MiB/s-27.6MiB/s (28.9MB/s-28.9MB/s), io=27.6MiB (28.9MB), run=1001-1001msec 00:09:40.667 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:09:40.667 00:09:40.667 Disk stats (read/write): 00:09:40.667 nvme0n1: ios=6194/6650, merge=0/0, ticks=320/307, in_queue=627, util=90.68% 00:09:40.667 21:39:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:42.572 
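A quick sanity check on the fio summary: 7065 reads completing in 1.001 s is about 7057 IOPS, and 7057 * 4096 B is roughly 28.9 MB/s (27.6 MiB/s), exactly the reported READ bandwidth; the 7168 issued writes give the 28.0 MiB/s WRITE figure the same way. The serial-number polling that brackets the fio run reduces to a small lsblk loop; a sketch reconstructed from the traced waitforserial helper (the retry limit and sleep interval match the trace, the single-device count is simplified):

    # Wait until a block device with the given NVMe serial shows up.
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME
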
21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:42.572 rmmod nvme_rdma 00:09:42.572 rmmod nvme_fabrics 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 2918591 ']' 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 2918591 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2918591 ']' 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2918591 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2918591 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2918591' 00:09:42.572 killing process with pid 2918591 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2918591 00:09:42.572 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2918591 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:09:42.832 00:09:42.832 real 0m15.649s 00:09:42.832 user 0m42.936s 00:09:42.832 sys 0m6.245s 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.832 ************************************ 00:09:42.832 END TEST nvmf_nmic 
00:09:42.832 ************************************ 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.832 ************************************ 00:09:42.832 START TEST nvmf_fio_target 00:09:42.832 ************************************ 00:09:42.832 21:39:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:42.832 * Looking for test storage... 00:09:42.832 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:42.832 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:42.832 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:42.832 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:43.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.092 --rc genhtml_branch_coverage=1 00:09:43.092 --rc genhtml_function_coverage=1 00:09:43.092 --rc genhtml_legend=1 00:09:43.092 --rc geninfo_all_blocks=1 00:09:43.092 --rc geninfo_unexecuted_blocks=1 00:09:43.092 00:09:43.092 ' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:43.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.092 --rc genhtml_branch_coverage=1 00:09:43.092 --rc genhtml_function_coverage=1 00:09:43.092 --rc genhtml_legend=1 00:09:43.092 --rc geninfo_all_blocks=1 00:09:43.092 --rc geninfo_unexecuted_blocks=1 00:09:43.092 00:09:43.092 ' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:43.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.092 --rc genhtml_branch_coverage=1 00:09:43.092 --rc genhtml_function_coverage=1 00:09:43.092 --rc genhtml_legend=1 00:09:43.092 --rc geninfo_all_blocks=1 00:09:43.092 --rc geninfo_unexecuted_blocks=1 00:09:43.092 00:09:43.092 ' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:43.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.092 --rc genhtml_branch_coverage=1 00:09:43.092 --rc genhtml_function_coverage=1 00:09:43.092 --rc genhtml_legend=1 00:09:43.092 --rc geninfo_all_blocks=1 00:09:43.092 --rc geninfo_unexecuted_blocks=1 00:09:43.092 00:09:43.092 ' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.092 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.092 
21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.092 21:39:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
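The e810 appends above and the mlx appends that follow are driven by a pci_bus_cache associative array keyed by vendor:device ID: e810 and x722 collect the Intel parts, mlx the Mellanox ones, and for an rdma transport all three lists are candidates (with the mlx5 driver winning on this rig). A minimal sketch of the pattern, with the IDs copied from this trace (the real cache is populated earlier by scanning /sys/bus/pci; treat this as an approximation of gather_supported_nvmf_pci_devs):

    declare -A pci_bus_cache   # "vendor:device" -> space-separated PCI addresses
    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=()
    for dev in 0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013; do
        mlx+=(${pci_bus_cache["$mellanox:$dev"]})
    done
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")   # rdma: all families eligible
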
00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:49.666 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:49.666 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # 
[[ mlx5_core == unbound ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:49.666 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:49.666 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # rdma_device_init 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux 
'!=' Linux ']' 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@526 -- # allocate_nic_ips 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:49.666 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:49.667 21:39:21 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:49.667 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:49.667 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:49.667 altname enp217s0f0np0 00:09:49.667 altname ens818f0np0 00:09:49.667 inet 192.168.100.8/24 scope global mlx_0_0 00:09:49.667 valid_lft forever preferred_lft forever 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:49.667 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:49.667 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:49.667 altname enp217s0f1np1 00:09:49.667 altname ens818f1np1 00:09:49.667 inet 192.168.100.9/24 scope global mlx_0_1 00:09:49.667 valid_lft forever preferred_lft forever 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:49.667 21:39:21 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:09:49.667 192.168.100.9' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:09:49.667 192.168.100.9' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # head -n 1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:09:49.667 192.168.100.9' 00:09:49.667 
21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # tail -n +2 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # head -n 1 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=2923438 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 2923438 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2923438 ']' 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.667 21:39:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.927 [2024-11-29 21:39:21.915874] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:49.927 [2024-11-29 21:39:21.915926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.927 [2024-11-29 21:39:21.985275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.927 [2024-11-29 21:39:22.025426] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.927 [2024-11-29 21:39:22.025468] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:49.927 [2024-11-29 21:39:22.025478] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.927 [2024-11-29 21:39:22.025486] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.927 [2024-11-29 21:39:22.025493] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.927 [2024-11-29 21:39:22.025545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.927 [2024-11-29 21:39:22.025639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.927 [2024-11-29 21:39:22.025729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.927 [2024-11-29 21:39:22.025731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.927 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.927 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:49.927 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:49.927 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.927 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.927 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.927 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:50.186 [2024-11-29 21:39:22.368082] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1708f50/0x170d400) succeed. 00:09:50.186 [2024-11-29 21:39:22.378621] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x170a540/0x174eaa0) succeed. 
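Up to this point the trace has loaded the IB/RDMA kernel modules, assigned 192.168.100.8/24 and 192.168.100.9/24 to the two mlx5 ports, started nvmf_tgt, and created the RDMA transport. The trace lines that follow perform the rest of the setup one RPC at a time; the commands below are a minimal condensed sketch of that sequence, assuming a running nvmf_tgt and an rpc.py on PATH (the harness invokes the script by its absolute workspace path). Names, sizes, the NQN, and the address are taken verbatim from the trace that follows.

    # Backing devices: two standalone malloc bdevs, a RAID-0 over two more,
    # and a concat bdev over three more (each 64 MiB with 512-byte blocks).
    rpc.py bdev_malloc_create 64 512    # -> Malloc0; the trace repeats this for Malloc1..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # Subsystem with four namespaces, listening on the first RDMA port.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Host side: connect over RDMA so the namespaces appear as /dev/nvme0n1..n4 for fio
    # (the trace additionally passes matching --hostnqn/--hostid values for this host).
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420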
00:09:50.445 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.705 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:50.705 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.705 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:50.705 21:39:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.964 21:39:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:50.964 21:39:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.223 21:39:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:51.223 21:39:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:51.482 21:39:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.741 21:39:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:51.741 21:39:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.001 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:52.001 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.001 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:52.001 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:52.259 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.519 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:52.519 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.778 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:52.778 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.778 21:39:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:53.051 [2024-11-29 21:39:25.166009] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:53.052 21:39:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:53.318 21:39:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:53.576 21:39:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:54.514 21:39:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:54.514 21:39:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:54.514 21:39:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.514 21:39:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:54.514 21:39:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:54.514 21:39:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:56.476 21:39:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:56.476 21:39:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:56.476 21:39:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.476 21:39:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:56.476 21:39:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.476 21:39:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:56.476 21:39:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:56.476 [global] 00:09:56.476 thread=1 00:09:56.476 invalidate=1 00:09:56.476 rw=write 00:09:56.476 time_based=1 00:09:56.476 runtime=1 00:09:56.476 ioengine=libaio 00:09:56.476 direct=1 00:09:56.476 bs=4096 00:09:56.476 iodepth=1 00:09:56.476 norandommap=0 00:09:56.476 numjobs=1 00:09:56.476 00:09:56.476 verify_dump=1 00:09:56.476 verify_backlog=512 00:09:56.476 verify_state_save=0 00:09:56.476 do_verify=1 00:09:56.476 verify=crc32c-intel 00:09:56.476 [job0] 00:09:56.476 filename=/dev/nvme0n1 00:09:56.476 [job1] 00:09:56.476 filename=/dev/nvme0n2 00:09:56.476 [job2] 00:09:56.476 filename=/dev/nvme0n3 00:09:56.476 [job3] 00:09:56.476 filename=/dev/nvme0n4 00:09:56.755 Could not set queue depth (nvme0n1) 00:09:56.755 Could not set queue depth (nvme0n2) 00:09:56.755 Could not set queue depth (nvme0n3) 00:09:56.755 Could not set queue depth (nvme0n4) 00:09:57.013 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.013 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.013 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.013 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.013 fio-3.35 00:09:57.013 Starting 4 threads 00:09:58.392 00:09:58.392 job0: (groupid=0, jobs=1): err= 0: pid=2924965: Fri Nov 29 21:39:30 2024 00:09:58.392 read: IOPS=3454, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1001msec) 00:09:58.392 slat (nsec): min=8392, max=35780, avg=9250.69, stdev=1083.28 00:09:58.392 clat (usec): min=63, max=202, avg=129.88, stdev=22.87 00:09:58.392 lat (usec): min=78, max=211, avg=139.13, stdev=22.96 00:09:58.392 clat percentiles (usec): 00:09:58.392 | 1.00th=[ 74], 5.00th=[ 78], 10.00th=[ 83], 20.00th=[ 128], 00:09:58.392 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:09:58.392 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 157], 00:09:58.392 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 202], 00:09:58.392 | 99.99th=[ 202] 00:09:58.392 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:58.392 slat (nsec): min=10237, max=55527, avg=11739.06, stdev=1986.84 00:09:58.392 clat (usec): min=62, max=444, avg=128.47, stdev=26.45 00:09:58.392 lat (usec): min=74, max=455, avg=140.21, stdev=26.52 00:09:58.392 clat percentiles (usec): 00:09:58.392 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 125], 00:09:58.392 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:09:58.392 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:09:58.392 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 273], 99.95th=[ 396], 00:09:58.392 | 99.99th=[ 445] 00:09:58.392 bw ( KiB/s): min=16384, max=16384, per=27.86%, avg=16384.00, stdev= 0.00, samples=1 00:09:58.392 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:58.392 lat (usec) : 100=15.98%, 250=83.95%, 500=0.07% 00:09:58.392 cpu : usr=4.50%, sys=10.60%, ctx=7043, majf=0, minf=1 00:09:58.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.392 issued rwts: total=3458,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.392 job1: (groupid=0, jobs=1): err= 0: pid=2924966: Fri Nov 29 21:39:30 2024 00:09:58.392 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:58.392 slat (nsec): min=8429, max=30791, avg=11004.62, stdev=3093.56 00:09:58.392 clat (usec): min=60, max=438, avg=121.62, stdev=27.24 00:09:58.392 lat (usec): min=75, max=455, avg=132.62, stdev=27.16 00:09:58.392 clat percentiles (usec): 00:09:58.392 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 79], 20.00th=[ 85], 00:09:58.392 | 30.00th=[ 124], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:09:58.392 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 149], 00:09:58.392 | 99.00th=[ 174], 99.50th=[ 184], 99.90th=[ 206], 99.95th=[ 355], 00:09:58.392 | 99.99th=[ 441] 00:09:58.392 write: IOPS=3780, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec); 0 zone resets 00:09:58.392 slat (nsec): min=9263, max=61373, avg=13320.16, stdev=3682.28 00:09:58.392 clat (usec): 
min=56, max=343, avg=119.89, stdev=29.95 00:09:58.392 lat (usec): min=74, max=355, avg=133.21, stdev=29.49 00:09:58.392 clat percentiles (usec): 00:09:58.392 | 1.00th=[ 67], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 81], 00:09:58.392 | 30.00th=[ 116], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:09:58.392 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 159], 00:09:58.392 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 233], 99.95th=[ 338], 00:09:58.392 | 99.99th=[ 343] 00:09:58.392 bw ( KiB/s): min=16384, max=16384, per=27.86%, avg=16384.00, stdev= 0.00, samples=1 00:09:58.392 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:58.392 lat (usec) : 100=28.14%, 250=71.81%, 500=0.05% 00:09:58.392 cpu : usr=5.50%, sys=10.30%, ctx=7369, majf=0, minf=1 00:09:58.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.392 issued rwts: total=3584,3784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.392 job2: (groupid=0, jobs=1): err= 0: pid=2924967: Fri Nov 29 21:39:30 2024 00:09:58.392 read: IOPS=3123, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:09:58.392 slat (nsec): min=6156, max=33654, avg=10262.40, stdev=2907.68 00:09:58.392 clat (usec): min=65, max=443, avg=135.91, stdev=21.08 00:09:58.392 lat (usec): min=79, max=453, avg=146.17, stdev=21.50 00:09:58.392 clat percentiles (usec): 00:09:58.392 | 1.00th=[ 81], 5.00th=[ 89], 10.00th=[ 121], 20.00th=[ 131], 00:09:58.392 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:09:58.392 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 163], 00:09:58.392 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 347], 99.95th=[ 416], 00:09:58.392 | 99.99th=[ 445] 00:09:58.392 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:58.392 slat (nsec): min=10534, max=46172, avg=12682.63, stdev=3223.15 00:09:58.392 clat (usec): min=68, max=196, avg=134.38, stdev=13.38 00:09:58.392 lat (usec): min=80, max=223, avg=147.06, stdev=14.52 00:09:58.392 clat percentiles (usec): 00:09:58.392 | 1.00th=[ 89], 5.00th=[ 115], 10.00th=[ 122], 20.00th=[ 129], 00:09:58.392 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:09:58.392 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 157], 00:09:58.392 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 194], 99.95th=[ 196], 00:09:58.392 | 99.99th=[ 196] 00:09:58.392 bw ( KiB/s): min=14936, max=14936, per=25.40%, avg=14936.00, stdev= 0.00, samples=1 00:09:58.392 iops : min= 3734, max= 3734, avg=3734.00, stdev= 0.00, samples=1 00:09:58.392 lat (usec) : 100=4.77%, 250=95.16%, 500=0.07% 00:09:58.392 cpu : usr=5.40%, sys=9.00%, ctx=6711, majf=0, minf=1 00:09:58.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.392 issued rwts: total=3127,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.393 job3: (groupid=0, jobs=1): err= 0: pid=2924968: Fri Nov 29 21:39:30 2024 00:09:58.393 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:58.393 slat (nsec): min=8629, max=32808, avg=9343.23, stdev=960.28 
00:09:58.393 clat (usec): min=72, max=441, avg=125.62, stdev=25.00 00:09:58.393 lat (usec): min=81, max=450, avg=134.96, stdev=25.05 00:09:58.393 clat percentiles (usec): 00:09:58.393 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 91], 00:09:58.393 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:09:58.393 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 151], 00:09:58.393 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 198], 99.95th=[ 355], 00:09:58.393 | 99.99th=[ 441] 00:09:58.393 write: IOPS=3759, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec); 0 zone resets 00:09:58.393 slat (nsec): min=10297, max=40748, avg=11623.28, stdev=1120.67 00:09:58.393 clat (usec): min=68, max=344, avg=121.03, stdev=24.51 00:09:58.393 lat (usec): min=80, max=355, avg=132.65, stdev=24.46 00:09:58.393 clat percentiles (usec): 00:09:58.393 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 87], 00:09:58.393 | 30.00th=[ 120], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:09:58.393 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 141], 95.00th=[ 145], 00:09:58.393 | 99.00th=[ 161], 99.50th=[ 172], 99.90th=[ 241], 99.95th=[ 343], 00:09:58.393 | 99.99th=[ 347] 00:09:58.393 bw ( KiB/s): min=16384, max=16384, per=27.86%, avg=16384.00, stdev= 0.00, samples=1 00:09:58.393 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:58.393 lat (usec) : 100=24.96%, 250=74.98%, 500=0.05% 00:09:58.393 cpu : usr=5.50%, sys=10.30%, ctx=7347, majf=0, minf=1 00:09:58.393 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.393 issued rwts: total=3584,3763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.393 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.393 00:09:58.393 Run status group 0 (all jobs): 00:09:58.393 READ: bw=53.7MiB/s (56.3MB/s), 12.2MiB/s-14.0MiB/s (12.8MB/s-14.7MB/s), io=53.7MiB (56.3MB), run=1001-1001msec 00:09:58.393 WRITE: bw=57.4MiB/s (60.2MB/s), 14.0MiB/s-14.8MiB/s (14.7MB/s-15.5MB/s), io=57.5MiB (60.3MB), run=1001-1001msec 00:09:58.393 00:09:58.393 Disk stats (read/write): 00:09:58.393 nvme0n1: ios=2922/3072, merge=0/0, ticks=363/351, in_queue=714, util=84.67% 00:09:58.393 nvme0n2: ios=3072/3203, merge=0/0, ticks=337/342, in_queue=679, util=85.31% 00:09:58.393 nvme0n3: ios=2560/3054, merge=0/0, ticks=329/379, in_queue=708, util=88.47% 00:09:58.393 nvme0n4: ios=3072/3176, merge=0/0, ticks=346/347, in_queue=693, util=89.51% 00:09:58.393 21:39:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:58.393 [global] 00:09:58.393 thread=1 00:09:58.393 invalidate=1 00:09:58.393 rw=randwrite 00:09:58.393 time_based=1 00:09:58.393 runtime=1 00:09:58.393 ioengine=libaio 00:09:58.393 direct=1 00:09:58.393 bs=4096 00:09:58.393 iodepth=1 00:09:58.393 norandommap=0 00:09:58.393 numjobs=1 00:09:58.393 00:09:58.393 verify_dump=1 00:09:58.393 verify_backlog=512 00:09:58.393 verify_state_save=0 00:09:58.393 do_verify=1 00:09:58.393 verify=crc32c-intel 00:09:58.393 [job0] 00:09:58.393 filename=/dev/nvme0n1 00:09:58.393 [job1] 00:09:58.393 filename=/dev/nvme0n2 00:09:58.393 [job2] 00:09:58.393 filename=/dev/nvme0n3 00:09:58.393 [job3] 00:09:58.393 filename=/dev/nvme0n4 00:09:58.393 Could not set queue depth (nvme0n1) 00:09:58.393 Could not set 
queue depth (nvme0n2) 00:09:58.393 Could not set queue depth (nvme0n3) 00:09:58.393 Could not set queue depth (nvme0n4) 00:09:58.393 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.393 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.393 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.393 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.393 fio-3.35 00:09:58.393 Starting 4 threads 00:09:59.772 00:09:59.772 job0: (groupid=0, jobs=1): err= 0: pid=2925395: Fri Nov 29 21:39:31 2024 00:09:59.772 read: IOPS=3226, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec) 00:09:59.772 slat (nsec): min=8359, max=39849, avg=11094.83, stdev=4165.00 00:09:59.772 clat (usec): min=67, max=225, avg=136.32, stdev=22.67 00:09:59.772 lat (usec): min=76, max=244, avg=147.42, stdev=24.19 00:09:59.772 clat percentiles (usec): 00:09:59.772 | 1.00th=[ 78], 5.00th=[ 95], 10.00th=[ 108], 20.00th=[ 119], 00:09:59.772 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:09:59.772 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 165], 95.00th=[ 176], 00:09:59.772 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 212], 99.95th=[ 221], 00:09:59.772 | 99.99th=[ 227] 00:09:59.772 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:59.772 slat (nsec): min=10414, max=68968, avg=12703.49, stdev=4411.66 00:09:59.772 clat (usec): min=67, max=271, avg=128.45, stdev=22.05 00:09:59.772 lat (usec): min=79, max=296, avg=141.16, stdev=23.93 00:09:59.772 clat percentiles (usec): 00:09:59.772 | 1.00th=[ 79], 5.00th=[ 94], 10.00th=[ 100], 20.00th=[ 105], 00:09:59.772 | 30.00th=[ 116], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:09:59.772 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 165], 00:09:59.772 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 204], 99.95th=[ 212], 00:09:59.772 | 99.99th=[ 273] 00:09:59.772 bw ( KiB/s): min=15352, max=15352, per=25.13%, avg=15352.00, stdev= 0.00, samples=1 00:09:59.772 iops : min= 3838, max= 3838, avg=3838.00, stdev= 0.00, samples=1 00:09:59.772 lat (usec) : 100=8.85%, 250=91.14%, 500=0.01% 00:09:59.772 cpu : usr=5.20%, sys=9.80%, ctx=6817, majf=0, minf=1 00:09:59.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.772 issued rwts: total=3230,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.772 job1: (groupid=0, jobs=1): err= 0: pid=2925396: Fri Nov 29 21:39:31 2024 00:09:59.772 read: IOPS=4293, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1001msec) 00:09:59.772 slat (nsec): min=8384, max=22535, avg=9159.02, stdev=932.89 00:09:59.772 clat (usec): min=57, max=184, avg=98.62, stdev=27.04 00:09:59.772 lat (usec): min=67, max=195, avg=107.78, stdev=27.27 00:09:59.772 clat percentiles (usec): 00:09:59.772 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:09:59.772 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 97], 00:09:59.772 | 70.00th=[ 118], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:09:59.772 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 180], 00:09:59.772 | 99.99th=[ 184] 00:09:59.772 
write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:09:59.772 slat (nsec): min=10233, max=44200, avg=11442.29, stdev=1235.91 00:09:59.772 clat (usec): min=58, max=265, avg=100.00, stdev=26.76 00:09:59.772 lat (usec): min=74, max=277, avg=111.45, stdev=26.99 00:09:59.772 clat percentiles (usec): 00:09:59.772 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74], 00:09:59.772 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 98], 60.00th=[ 105], 00:09:59.772 | 70.00th=[ 127], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 139], 00:09:59.772 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 182], 00:09:59.772 | 99.99th=[ 265] 00:09:59.773 bw ( KiB/s): min=20480, max=20480, per=33.52%, avg=20480.00, stdev= 0.00, samples=1 00:09:59.773 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:59.773 lat (usec) : 100=56.22%, 250=43.77%, 500=0.01% 00:09:59.773 cpu : usr=8.20%, sys=11.00%, ctx=8906, majf=0, minf=1 00:09:59.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.773 issued rwts: total=4298,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.773 job2: (groupid=0, jobs=1): err= 0: pid=2925399: Fri Nov 29 21:39:31 2024 00:09:59.773 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:59.773 slat (nsec): min=8734, max=48786, avg=11704.33, stdev=3895.24 00:09:59.773 clat (usec): min=75, max=232, avg=139.71, stdev=19.56 00:09:59.773 lat (usec): min=85, max=252, avg=151.41, stdev=21.03 00:09:59.773 clat percentiles (usec): 00:09:59.773 | 1.00th=[ 86], 5.00th=[ 114], 10.00th=[ 122], 20.00th=[ 129], 00:09:59.773 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:09:59.773 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 178], 00:09:59.773 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 212], 99.95th=[ 215], 00:09:59.773 | 99.99th=[ 233] 00:09:59.773 write: IOPS=3509, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1001msec); 0 zone resets 00:09:59.773 slat (nsec): min=10389, max=68345, avg=13818.20, stdev=3763.28 00:09:59.773 clat (usec): min=71, max=209, avg=132.79, stdev=19.45 00:09:59.773 lat (usec): min=83, max=232, avg=146.60, stdev=20.52 00:09:59.773 clat percentiles (usec): 00:09:59.773 | 1.00th=[ 80], 5.00th=[ 105], 10.00th=[ 113], 20.00th=[ 120], 00:09:59.773 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:09:59.773 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 157], 95.00th=[ 172], 00:09:59.773 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 206], 99.95th=[ 208], 00:09:59.773 | 99.99th=[ 210] 00:09:59.773 bw ( KiB/s): min=14048, max=14048, per=22.99%, avg=14048.00, stdev= 0.00, samples=1 00:09:59.773 iops : min= 3512, max= 3512, avg=3512.00, stdev= 0.00, samples=1 00:09:59.773 lat (usec) : 100=3.71%, 250=96.29% 00:09:59.773 cpu : usr=6.00%, sys=10.30%, ctx=6586, majf=0, minf=1 00:09:59.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.773 issued rwts: total=3072,3513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.773 job3: (groupid=0, jobs=1): err= 0: pid=2925400: Fri 
Nov 29 21:39:31 2024 00:09:59.773 read: IOPS=3089, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec) 00:09:59.773 slat (nsec): min=8758, max=39087, avg=10941.60, stdev=2802.55 00:09:59.773 clat (usec): min=71, max=209, avg=138.66, stdev=18.74 00:09:59.773 lat (usec): min=87, max=233, avg=149.61, stdev=19.18 00:09:59.773 clat percentiles (usec): 00:09:59.773 | 1.00th=[ 88], 5.00th=[ 102], 10.00th=[ 122], 20.00th=[ 128], 00:09:59.773 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:09:59.773 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 163], 95.00th=[ 174], 00:09:59.773 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 204], 99.95th=[ 206], 00:09:59.773 | 99.99th=[ 210] 00:09:59.773 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:59.773 slat (usec): min=10, max=102, avg=13.46, stdev= 3.61 00:09:59.773 clat (usec): min=66, max=215, avg=131.05, stdev=17.80 00:09:59.773 lat (usec): min=84, max=236, avg=144.51, stdev=18.32 00:09:59.773 clat percentiles (usec): 00:09:59.773 | 1.00th=[ 85], 5.00th=[ 102], 10.00th=[ 113], 20.00th=[ 120], 00:09:59.773 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:09:59.773 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 165], 00:09:59.773 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 198], 99.95th=[ 210], 00:09:59.773 | 99.99th=[ 217] 00:09:59.773 bw ( KiB/s): min=14749, max=14749, per=24.14%, avg=14749.00, stdev= 0.00, samples=1 00:09:59.773 iops : min= 3687, max= 3687, avg=3687.00, stdev= 0.00, samples=1 00:09:59.773 lat (usec) : 100=4.79%, 250=95.21% 00:09:59.773 cpu : usr=5.10%, sys=10.00%, ctx=6679, majf=0, minf=1 00:09:59.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.773 issued rwts: total=3093,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.773 00:09:59.773 Run status group 0 (all jobs): 00:09:59.773 READ: bw=53.4MiB/s (56.0MB/s), 12.0MiB/s-16.8MiB/s (12.6MB/s-17.6MB/s), io=53.5MiB (56.1MB), run=1001-1001msec 00:09:59.773 WRITE: bw=59.7MiB/s (62.6MB/s), 13.7MiB/s-18.0MiB/s (14.4MB/s-18.9MB/s), io=59.7MiB (62.6MB), run=1001-1001msec 00:09:59.773 00:09:59.773 Disk stats (read/write): 00:09:59.773 nvme0n1: ios=2674/3072, merge=0/0, ticks=345/368, in_queue=713, util=84.47% 00:09:59.773 nvme0n2: ios=3696/4096, merge=0/0, ticks=307/355, in_queue=662, util=85.20% 00:09:59.773 nvme0n3: ios=2560/2906, merge=0/0, ticks=337/359, in_queue=696, util=88.36% 00:09:59.773 nvme0n4: ios=2560/3014, merge=0/0, ticks=334/360, in_queue=694, util=89.40% 00:09:59.773 21:39:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:59.773 [global] 00:09:59.773 thread=1 00:09:59.773 invalidate=1 00:09:59.773 rw=write 00:09:59.773 time_based=1 00:09:59.773 runtime=1 00:09:59.773 ioengine=libaio 00:09:59.773 direct=1 00:09:59.773 bs=4096 00:09:59.773 iodepth=128 00:09:59.773 norandommap=0 00:09:59.773 numjobs=1 00:09:59.773 00:09:59.773 verify_dump=1 00:09:59.773 verify_backlog=512 00:09:59.773 verify_state_save=0 00:09:59.773 do_verify=1 00:09:59.773 verify=crc32c-intel 00:09:59.773 [job0] 00:09:59.773 filename=/dev/nvme0n1 00:09:59.773 [job1] 00:09:59.773 filename=/dev/nvme0n2 00:09:59.773 [job2] 00:09:59.773 
filename=/dev/nvme0n3 00:09:59.773 [job3] 00:09:59.773 filename=/dev/nvme0n4 00:09:59.773 Could not set queue depth (nvme0n1) 00:09:59.773 Could not set queue depth (nvme0n2) 00:09:59.773 Could not set queue depth (nvme0n3) 00:09:59.773 Could not set queue depth (nvme0n4) 00:10:00.032 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.032 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.032 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.032 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.032 fio-3.35 00:10:00.032 Starting 4 threads 00:10:01.411 00:10:01.411 job0: (groupid=0, jobs=1): err= 0: pid=2925817: Fri Nov 29 21:39:33 2024 00:10:01.411 read: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(36.0MiB/1002msec) 00:10:01.411 slat (usec): min=2, max=2271, avg=54.70, stdev=209.30 00:10:01.411 clat (usec): min=1359, max=12692, avg=7093.14, stdev=1514.82 00:10:01.411 lat (usec): min=1363, max=12696, avg=7147.84, stdev=1524.58 00:10:01.411 clat percentiles (usec): 00:10:01.411 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 6325], 00:10:01.411 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6915], 00:10:01.411 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[10552], 95.00th=[10683], 00:10:01.411 | 99.00th=[11076], 99.50th=[11338], 99.90th=[11863], 99.95th=[11863], 00:10:01.411 | 99.99th=[12649] 00:10:01.411 write: IOPS=9208, BW=36.0MiB/s (37.7MB/s)(36.0MiB/1002msec); 0 zone resets 00:10:01.411 slat (usec): min=2, max=2363, avg=50.97, stdev=192.33 00:10:01.411 clat (usec): min=562, max=11648, avg=6664.71, stdev=1543.17 00:10:01.411 lat (usec): min=1307, max=11652, avg=6715.68, stdev=1553.94 00:10:01.411 clat percentiles (usec): 00:10:01.411 | 1.00th=[ 4817], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5604], 00:10:01.411 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:10:01.411 | 70.00th=[ 6587], 80.00th=[ 6849], 90.00th=[10421], 95.00th=[10683], 00:10:01.411 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11076], 99.95th=[11076], 00:10:01.411 | 99.99th=[11600] 00:10:01.411 bw ( KiB/s): min=36864, max=36864, per=36.09%, avg=36864.00, stdev= 0.00, samples=2 00:10:01.411 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:10:01.411 lat (usec) : 750=0.01% 00:10:01.411 lat (msec) : 2=0.09%, 4=0.17%, 10=88.46%, 20=11.27% 00:10:01.411 cpu : usr=3.00%, sys=6.69%, ctx=1690, majf=0, minf=1 00:10:01.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:01.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.411 issued rwts: total=9216,9227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.411 job1: (groupid=0, jobs=1): err= 0: pid=2925818: Fri Nov 29 21:39:33 2024 00:10:01.411 read: IOPS=8003, BW=31.3MiB/s (32.8MB/s)(31.4MiB/1003msec) 00:10:01.411 slat (nsec): min=1996, max=1899.3k, avg=61607.30, stdev=209532.33 00:10:01.411 clat (usec): min=1868, max=19610, avg=7952.31, stdev=3116.46 00:10:01.411 lat (usec): min=2665, max=19835, avg=8013.91, stdev=3135.32 00:10:01.411 clat percentiles (usec): 00:10:01.411 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6456], 20.00th=[ 6587], 00:10:01.411 | 
30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6915], 00:10:01.411 | 70.00th=[ 6980], 80.00th=[ 7177], 90.00th=[10814], 95.00th=[18482], 00:10:01.412 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:10:01.412 | 99.99th=[19530] 00:10:01.412 write: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec); 0 zone resets 00:10:01.412 slat (usec): min=2, max=1816, avg=59.16, stdev=199.33 00:10:01.412 clat (usec): min=2867, max=19147, avg=7653.21, stdev=3056.61 00:10:01.412 lat (usec): min=2876, max=19152, avg=7712.37, stdev=3073.92 00:10:01.412 clat percentiles (usec): 00:10:01.412 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 6194], 00:10:01.412 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6521], 00:10:01.412 | 70.00th=[ 6652], 80.00th=[ 7439], 90.00th=[10814], 95.00th=[17433], 00:10:01.412 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:01.412 | 99.99th=[19268] 00:10:01.412 bw ( KiB/s): min=26464, max=39072, per=32.08%, avg=32768.00, stdev=8915.20, samples=2 00:10:01.412 iops : min= 6616, max= 9768, avg=8192.00, stdev=2228.80, samples=2 00:10:01.412 lat (msec) : 2=0.01%, 4=0.30%, 10=81.24%, 20=18.45% 00:10:01.412 cpu : usr=2.99%, sys=4.79%, ctx=1755, majf=0, minf=1 00:10:01.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:01.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.412 issued rwts: total=8028,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.412 job2: (groupid=0, jobs=1): err= 0: pid=2925819: Fri Nov 29 21:39:33 2024 00:10:01.412 read: IOPS=4041, BW=15.8MiB/s (16.6MB/s)(15.8MiB/1003msec) 00:10:01.412 slat (usec): min=2, max=1435, avg=124.73, stdev=317.84 00:10:01.412 clat (usec): min=2034, max=19624, avg=15937.35, stdev=2038.89 00:10:01.412 lat (usec): min=2959, max=20094, avg=16062.08, stdev=2023.73 00:10:01.412 clat percentiles (usec): 00:10:01.412 | 1.00th=[ 6915], 5.00th=[13042], 10.00th=[13566], 20.00th=[14484], 00:10:01.412 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16319], 60.00th=[16450], 00:10:01.412 | 70.00th=[16581], 80.00th=[16712], 90.00th=[18744], 95.00th=[19268], 00:10:01.412 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:10:01.412 | 99.99th=[19530] 00:10:01.412 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:01.412 slat (usec): min=2, max=1415, avg=117.29, stdev=297.73 00:10:01.412 clat (usec): min=11632, max=19671, avg=15137.55, stdev=1660.56 00:10:01.412 lat (usec): min=12428, max=19680, avg=15254.84, stdev=1646.50 00:10:01.412 clat percentiles (usec): 00:10:01.412 | 1.00th=[11863], 5.00th=[12649], 10.00th=[12649], 20.00th=[12911], 00:10:01.412 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:10:01.412 | 70.00th=[15795], 80.00th=[16057], 90.00th=[17695], 95.00th=[18220], 00:10:01.412 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:01.412 | 99.99th=[19792] 00:10:01.412 bw ( KiB/s): min=16384, max=16384, per=16.04%, avg=16384.00, stdev= 0.00, samples=2 00:10:01.412 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:01.412 lat (msec) : 4=0.11%, 10=0.64%, 20=99.25% 00:10:01.412 cpu : usr=2.89%, sys=2.59%, ctx=1330, majf=0, minf=1 00:10:01.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 
00:10:01.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.412 issued rwts: total=4054,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.412 job3: (groupid=0, jobs=1): err= 0: pid=2925820: Fri Nov 29 21:39:33 2024 00:10:01.412 read: IOPS=4038, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1003msec) 00:10:01.412 slat (usec): min=2, max=1496, avg=124.93, stdev=318.06 00:10:01.412 clat (usec): min=2046, max=20277, avg=15933.91, stdev=2057.28 00:10:01.412 lat (usec): min=2960, max=20426, avg=16058.83, stdev=2043.76 00:10:01.412 clat percentiles (usec): 00:10:01.412 | 1.00th=[ 6980], 5.00th=[12911], 10.00th=[13566], 20.00th=[14484], 00:10:01.412 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16319], 60.00th=[16450], 00:10:01.412 | 70.00th=[16581], 80.00th=[16712], 90.00th=[18744], 95.00th=[19268], 00:10:01.412 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:10:01.412 | 99.99th=[20317] 00:10:01.412 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:01.412 slat (usec): min=2, max=1449, avg=117.27, stdev=297.94 00:10:01.412 clat (usec): min=11622, max=19088, avg=15143.43, stdev=1664.18 00:10:01.412 lat (usec): min=12393, max=19631, avg=15260.70, stdev=1650.07 00:10:01.412 clat percentiles (usec): 00:10:01.412 | 1.00th=[11863], 5.00th=[12649], 10.00th=[12649], 20.00th=[12911], 00:10:01.412 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:10:01.412 | 70.00th=[15795], 80.00th=[16057], 90.00th=[17695], 95.00th=[18220], 00:10:01.412 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:01.412 | 99.99th=[19006] 00:10:01.412 bw ( KiB/s): min=16384, max=16384, per=16.04%, avg=16384.00, stdev= 0.00, samples=2 00:10:01.412 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:01.412 lat (msec) : 4=0.15%, 10=0.56%, 20=99.28%, 50=0.01% 00:10:01.412 cpu : usr=2.10%, sys=3.39%, ctx=1324, majf=0, minf=1 00:10:01.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:01.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.412 issued rwts: total=4051,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.412 00:10:01.412 Run status group 0 (all jobs): 00:10:01.412 READ: bw=98.7MiB/s (104MB/s), 15.8MiB/s-35.9MiB/s (16.5MB/s-37.7MB/s), io=99.0MiB (104MB), run=1002-1003msec 00:10:01.412 WRITE: bw=99.7MiB/s (105MB/s), 16.0MiB/s-36.0MiB/s (16.7MB/s-37.7MB/s), io=100MiB (105MB), run=1002-1003msec 00:10:01.412 00:10:01.412 Disk stats (read/write): 00:10:01.412 nvme0n1: ios=7660/7680, merge=0/0, ticks=13452/12653, in_queue=26105, util=84.85% 00:10:01.412 nvme0n2: ios=6332/6656, merge=0/0, ticks=20921/20812, in_queue=41733, util=85.50% 00:10:01.412 nvme0n3: ios=3231/3584, merge=0/0, ticks=12977/13503, in_queue=26480, util=88.38% 00:10:01.412 nvme0n4: ios=3228/3584, merge=0/0, ticks=12992/13490, in_queue=26482, util=89.42% 00:10:01.412 21:39:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:01.412 [global] 00:10:01.412 thread=1 00:10:01.412 invalidate=1 00:10:01.412 rw=randwrite 00:10:01.412 
time_based=1 00:10:01.412 runtime=1 00:10:01.412 ioengine=libaio 00:10:01.412 direct=1 00:10:01.412 bs=4096 00:10:01.412 iodepth=128 00:10:01.412 norandommap=0 00:10:01.412 numjobs=1 00:10:01.412 00:10:01.412 verify_dump=1 00:10:01.412 verify_backlog=512 00:10:01.412 verify_state_save=0 00:10:01.412 do_verify=1 00:10:01.412 verify=crc32c-intel 00:10:01.412 [job0] 00:10:01.412 filename=/dev/nvme0n1 00:10:01.412 [job1] 00:10:01.412 filename=/dev/nvme0n2 00:10:01.412 [job2] 00:10:01.412 filename=/dev/nvme0n3 00:10:01.412 [job3] 00:10:01.412 filename=/dev/nvme0n4 00:10:01.412 Could not set queue depth (nvme0n1) 00:10:01.412 Could not set queue depth (nvme0n2) 00:10:01.412 Could not set queue depth (nvme0n3) 00:10:01.412 Could not set queue depth (nvme0n4) 00:10:01.671 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.671 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.671 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.671 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.671 fio-3.35 00:10:01.671 Starting 4 threads 00:10:03.050 00:10:03.050 job0: (groupid=0, jobs=1): err= 0: pid=2926244: Fri Nov 29 21:39:35 2024 00:10:03.050 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:03.050 slat (usec): min=2, max=1547, avg=135.93, stdev=297.54 00:10:03.050 clat (usec): min=15626, max=19487, avg=17483.45, stdev=446.06 00:10:03.050 lat (usec): min=16078, max=19711, avg=17619.38, stdev=452.67 00:10:03.050 clat percentiles (usec): 00:10:03.050 | 1.00th=[16319], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:10:03.050 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:10:03.050 | 70.00th=[17695], 80.00th=[17957], 90.00th=[17957], 95.00th=[18220], 00:10:03.050 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19268], 00:10:03.050 | 99.99th=[19530] 00:10:03.050 write: IOPS=3921, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1006msec); 0 zone resets 00:10:03.050 slat (usec): min=2, max=1611, avg=126.50, stdev=274.40 00:10:03.050 clat (usec): min=4863, max=20705, avg=16337.45, stdev=1189.13 00:10:03.050 lat (usec): min=5448, max=20718, avg=16463.96, stdev=1193.35 00:10:03.050 clat percentiles (usec): 00:10:03.050 | 1.00th=[ 9634], 5.00th=[15664], 10.00th=[15795], 20.00th=[16057], 00:10:03.050 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16581], 60.00th=[16581], 00:10:03.050 | 70.00th=[16712], 80.00th=[16712], 90.00th=[16909], 95.00th=[17171], 00:10:03.050 | 99.00th=[17695], 99.50th=[17957], 99.90th=[20579], 99.95th=[20579], 00:10:03.050 | 99.99th=[20579] 00:10:03.050 bw ( KiB/s): min=14160, max=16384, per=17.59%, avg=15272.00, stdev=1572.61, samples=2 00:10:03.050 iops : min= 3540, max= 4096, avg=3818.00, stdev=393.15, samples=2 00:10:03.050 lat (msec) : 10=0.57%, 20=99.38%, 50=0.05% 00:10:03.050 cpu : usr=2.09%, sys=3.78%, ctx=2606, majf=0, minf=1 00:10:03.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:03.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.050 issued rwts: total=3584,3945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.050 job1: (groupid=0, jobs=1): err= 0: 
pid=2926245: Fri Nov 29 21:39:35 2024 00:10:03.050 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:03.050 slat (usec): min=2, max=1594, avg=135.94, stdev=298.24 00:10:03.050 clat (usec): min=15858, max=19642, avg=17488.94, stdev=435.68 00:10:03.050 lat (usec): min=16106, max=19647, avg=17624.88, stdev=445.25 00:10:03.050 clat percentiles (usec): 00:10:03.050 | 1.00th=[16319], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:10:03.050 | 30.00th=[17433], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:10:03.050 | 70.00th=[17695], 80.00th=[17957], 90.00th=[17957], 95.00th=[17957], 00:10:03.050 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19268], 99.95th=[19530], 00:10:03.050 | 99.99th=[19530] 00:10:03.050 write: IOPS=3913, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1006msec); 0 zone resets 00:10:03.050 slat (usec): min=2, max=1757, avg=126.83, stdev=282.72 00:10:03.050 clat (usec): min=4849, max=20698, avg=16360.90, stdev=1154.76 00:10:03.050 lat (usec): min=5327, max=21392, avg=16487.73, stdev=1158.53 00:10:03.050 clat percentiles (usec): 00:10:03.050 | 1.00th=[ 9634], 5.00th=[15664], 10.00th=[15795], 20.00th=[16057], 00:10:03.050 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16581], 60.00th=[16581], 00:10:03.050 | 70.00th=[16581], 80.00th=[16712], 90.00th=[16909], 95.00th=[17171], 00:10:03.050 | 99.00th=[17957], 99.50th=[19268], 99.90th=[20579], 99.95th=[20579], 00:10:03.050 | 99.99th=[20579] 00:10:03.050 bw ( KiB/s): min=14096, max=16384, per=17.55%, avg=15240.00, stdev=1617.86, samples=2 00:10:03.050 iops : min= 3524, max= 4096, avg=3810.00, stdev=404.47, samples=2 00:10:03.050 lat (msec) : 10=0.55%, 20=99.39%, 50=0.07% 00:10:03.050 cpu : usr=1.49%, sys=4.28%, ctx=2616, majf=0, minf=1 00:10:03.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:03.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.050 issued rwts: total=3584,3937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.050 job2: (groupid=0, jobs=1): err= 0: pid=2926246: Fri Nov 29 21:39:35 2024 00:10:03.050 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:10:03.050 slat (usec): min=2, max=1575, avg=135.64, stdev=297.73 00:10:03.050 clat (usec): min=16041, max=19290, avg=17500.25, stdev=440.30 00:10:03.050 lat (usec): min=16075, max=19611, avg=17635.89, stdev=455.19 00:10:03.050 clat percentiles (usec): 00:10:03.050 | 1.00th=[16319], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:10:03.050 | 30.00th=[17433], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:10:03.050 | 70.00th=[17695], 80.00th=[17957], 90.00th=[17957], 95.00th=[18220], 00:10:03.050 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19006], 99.95th=[19268], 00:10:03.050 | 99.99th=[19268] 00:10:03.050 write: IOPS=3908, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1005msec); 0 zone resets 00:10:03.050 slat (usec): min=2, max=2067, avg=127.42, stdev=289.55 00:10:03.050 clat (usec): min=4667, max=21205, avg=16373.93, stdev=1130.50 00:10:03.050 lat (usec): min=5280, max=21217, avg=16501.35, stdev=1132.22 00:10:03.050 clat percentiles (usec): 00:10:03.050 | 1.00th=[ 9503], 5.00th=[15664], 10.00th=[15795], 20.00th=[16057], 00:10:03.050 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16450], 60.00th=[16581], 00:10:03.050 | 70.00th=[16581], 80.00th=[16712], 90.00th=[16909], 95.00th=[17171], 00:10:03.050 | 99.00th=[18220], 99.50th=[19006], 
99.90th=[20579], 99.95th=[21103], 00:10:03.050 | 99.99th=[21103] 00:10:03.050 bw ( KiB/s): min=14024, max=16384, per=17.51%, avg=15204.00, stdev=1668.77, samples=2 00:10:03.050 iops : min= 3506, max= 4096, avg=3801.00, stdev=417.19, samples=2 00:10:03.050 lat (msec) : 10=0.53%, 20=99.25%, 50=0.21% 00:10:03.050 cpu : usr=2.09%, sys=3.69%, ctx=2615, majf=0, minf=1 00:10:03.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:03.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.050 issued rwts: total=3584,3928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.050 job3: (groupid=0, jobs=1): err= 0: pid=2926247: Fri Nov 29 21:39:35 2024 00:10:03.050 read: IOPS=9669, BW=37.8MiB/s (39.6MB/s)(38.0MiB/1006msec) 00:10:03.050 slat (usec): min=2, max=1001, avg=49.52, stdev=179.66 00:10:03.050 clat (usec): min=1415, max=7707, avg=6566.38, stdev=516.90 00:10:03.050 lat (usec): min=1605, max=7711, avg=6615.89, stdev=520.53 00:10:03.050 clat percentiles (usec): 00:10:03.050 | 1.00th=[ 3621], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6456], 00:10:03.050 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6652], 60.00th=[ 6718], 00:10:03.050 | 70.00th=[ 6718], 80.00th=[ 6783], 90.00th=[ 6915], 95.00th=[ 6980], 00:10:03.050 | 99.00th=[ 7439], 99.50th=[ 7504], 99.90th=[ 7570], 99.95th=[ 7635], 00:10:03.050 | 99.99th=[ 7701] 00:10:03.050 write: IOPS=9970, BW=38.9MiB/s (40.8MB/s)(39.2MiB/1006msec); 0 zone resets 00:10:03.050 slat (usec): min=2, max=1226, avg=48.26, stdev=172.39 00:10:03.050 clat (usec): min=4938, max=11684, avg=6329.10, stdev=442.40 00:10:03.050 lat (usec): min=4947, max=11694, avg=6377.36, stdev=441.86 00:10:03.050 clat percentiles (usec): 00:10:03.050 | 1.00th=[ 5473], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6194], 00:10:03.050 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6325], 60.00th=[ 6390], 00:10:03.050 | 70.00th=[ 6390], 80.00th=[ 6456], 90.00th=[ 6587], 95.00th=[ 6718], 00:10:03.050 | 99.00th=[ 7242], 99.50th=[ 9372], 99.90th=[11600], 99.95th=[11600], 00:10:03.050 | 99.99th=[11731] 00:10:03.050 bw ( KiB/s): min=38264, max=40960, per=45.62%, avg=39612.00, stdev=1906.36, samples=2 00:10:03.050 iops : min= 9566, max=10240, avg=9903.00, stdev=476.59, samples=2 00:10:03.050 lat (msec) : 2=0.02%, 4=0.49%, 10=99.34%, 20=0.16% 00:10:03.050 cpu : usr=3.98%, sys=7.56%, ctx=1267, majf=0, minf=2 00:10:03.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:03.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.050 issued rwts: total=9728,10030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.050 00:10:03.050 Run status group 0 (all jobs): 00:10:03.050 READ: bw=79.5MiB/s (83.4MB/s), 13.9MiB/s-37.8MiB/s (14.6MB/s-39.6MB/s), io=80.0MiB (83.9MB), run=1005-1006msec 00:10:03.050 WRITE: bw=84.8MiB/s (88.9MB/s), 15.3MiB/s-38.9MiB/s (16.0MB/s-40.8MB/s), io=85.3MiB (89.5MB), run=1005-1006msec 00:10:03.050 00:10:03.050 Disk stats (read/write): 00:10:03.050 nvme0n1: ios=3121/3170, merge=0/0, ticks=17729/17075, in_queue=34804, util=84.85% 00:10:03.050 nvme0n2: ios=3072/3175, merge=0/0, ticks=17717/17079, in_queue=34796, util=85.50% 00:10:03.050 nvme0n3: ios=3072/3167, merge=0/0, 
ticks=17693/17077, in_queue=34770, util=88.48% 00:10:03.050 nvme0n4: ios=8192/8237, merge=0/0, ticks=46245/43807, in_queue=90052, util=89.42% 00:10:03.050 21:39:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:03.050 21:39:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2926511 00:10:03.050 21:39:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:03.050 21:39:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:03.050 [global] 00:10:03.050 thread=1 00:10:03.050 invalidate=1 00:10:03.050 rw=read 00:10:03.050 time_based=1 00:10:03.050 runtime=10 00:10:03.050 ioengine=libaio 00:10:03.050 direct=1 00:10:03.050 bs=4096 00:10:03.050 iodepth=1 00:10:03.050 norandommap=1 00:10:03.050 numjobs=1 00:10:03.050 00:10:03.050 [job0] 00:10:03.050 filename=/dev/nvme0n1 00:10:03.050 [job1] 00:10:03.050 filename=/dev/nvme0n2 00:10:03.050 [job2] 00:10:03.050 filename=/dev/nvme0n3 00:10:03.050 [job3] 00:10:03.050 filename=/dev/nvme0n4 00:10:03.050 Could not set queue depth (nvme0n1) 00:10:03.050 Could not set queue depth (nvme0n2) 00:10:03.050 Could not set queue depth (nvme0n3) 00:10:03.050 Could not set queue depth (nvme0n4) 00:10:03.308 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.308 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.308 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.308 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.308 fio-3.35 00:10:03.308 Starting 4 threads 00:10:06.597 21:39:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:06.597 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=75395072, buflen=4096 00:10:06.597 fio: pid=2926671, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.597 21:39:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:06.597 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=80379904, buflen=4096 00:10:06.597 fio: pid=2926670, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.597 21:39:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.597 21:39:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:06.597 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=59109376, buflen=4096 00:10:06.597 fio: pid=2926668, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.597 21:39:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.597 21:39:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:06.857 fio: 
io_u error on file /dev/nvme0n2: Operation not supported: read offset=57040896, buflen=4096 00:10:06.857 fio: pid=2926669, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.857 21:39:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.857 21:39:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:06.857 00:10:06.857 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2926668: Fri Nov 29 21:39:38 2024 00:10:06.857 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(120MiB/3039msec) 00:10:06.857 slat (usec): min=6, max=30649, avg=11.49, stdev=224.81 00:10:06.857 clat (usec): min=51, max=251, avg=84.88, stdev=10.07 00:10:06.857 lat (usec): min=60, max=30730, avg=96.38, stdev=225.18 00:10:06.857 clat percentiles (usec): 00:10:06.857 | 1.00th=[ 61], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 79], 00:10:06.857 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:10:06.857 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 105], 00:10:06.857 | 99.00th=[ 119], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 145], 00:10:06.857 | 99.99th=[ 172] 00:10:06.857 bw ( KiB/s): min=38424, max=43312, per=34.40%, avg=41785.60, stdev=1983.64, samples=5 00:10:06.857 iops : min= 9606, max=10828, avg=10446.40, stdev=495.91, samples=5 00:10:06.857 lat (usec) : 100=93.10%, 250=6.89%, 500=0.01% 00:10:06.857 cpu : usr=5.07%, sys=13.76%, ctx=30822, majf=0, minf=1 00:10:06.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.857 issued rwts: total=30816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.857 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2926669: Fri Nov 29 21:39:38 2024 00:10:06.857 read: IOPS=9283, BW=36.3MiB/s (38.0MB/s)(118MiB/3265msec) 00:10:06.857 slat (usec): min=8, max=16962, avg=11.99, stdev=222.56 00:10:06.857 clat (usec): min=33, max=22566, avg=93.51, stdev=132.44 00:10:06.857 lat (usec): min=58, max=22575, avg=105.50, stdev=258.92 00:10:06.857 clat percentiles (usec): 00:10:06.857 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 68], 20.00th=[ 76], 00:10:06.857 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 85], 00:10:06.857 | 70.00th=[ 89], 80.00th=[ 114], 90.00th=[ 147], 95.00th=[ 157], 00:10:06.857 | 99.00th=[ 172], 99.50th=[ 188], 99.90th=[ 212], 99.95th=[ 215], 00:10:06.857 | 99.99th=[ 221] 00:10:06.857 bw ( KiB/s): min=24664, max=44000, per=30.07%, avg=36530.33, stdev=7375.18, samples=6 00:10:06.857 iops : min= 6166, max=11000, avg=9132.50, stdev=1843.85, samples=6 00:10:06.857 lat (usec) : 50=0.02%, 100=75.90%, 250=24.08% 00:10:06.857 lat (msec) : 2=0.01%, 50=0.01% 00:10:06.857 cpu : usr=4.69%, sys=12.56%, ctx=30317, majf=0, minf=2 00:10:06.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.857 issued rwts: total=30311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:06.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.857 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2926670: Fri Nov 29 21:39:38 2024 00:10:06.857 read: IOPS=6912, BW=27.0MiB/s (28.3MB/s)(76.7MiB/2839msec) 00:10:06.857 slat (usec): min=6, max=15906, avg=10.75, stdev=136.81 00:10:06.857 clat (usec): min=69, max=311, avg=131.23, stdev=22.14 00:10:06.857 lat (usec): min=78, max=16003, avg=141.98, stdev=138.22 00:10:06.857 clat percentiles (usec): 00:10:06.857 | 1.00th=[ 79], 5.00th=[ 85], 10.00th=[ 96], 20.00th=[ 120], 00:10:06.857 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:10:06.857 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 165], 00:10:06.857 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 208], 99.95th=[ 215], 00:10:06.857 | 99.99th=[ 225] 00:10:06.857 bw ( KiB/s): min=24664, max=28656, per=22.52%, avg=27353.60, stdev=1637.55, samples=5 00:10:06.857 iops : min= 6166, max= 7164, avg=6838.40, stdev=409.39, samples=5 00:10:06.857 lat (usec) : 100=11.04%, 250=88.95%, 500=0.01% 00:10:06.857 cpu : usr=3.42%, sys=9.97%, ctx=19628, majf=0, minf=2 00:10:06.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.857 issued rwts: total=19625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.857 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2926671: Fri Nov 29 21:39:38 2024 00:10:06.857 read: IOPS=7017, BW=27.4MiB/s (28.7MB/s)(71.9MiB/2623msec) 00:10:06.857 slat (nsec): min=8441, max=36546, avg=9196.61, stdev=823.94 00:10:06.857 clat (usec): min=70, max=315, avg=131.57, stdev=22.41 00:10:06.857 lat (usec): min=79, max=325, avg=140.77, stdev=22.43 00:10:06.857 clat percentiles (usec): 00:10:06.857 | 1.00th=[ 81], 5.00th=[ 87], 10.00th=[ 94], 20.00th=[ 120], 00:10:06.857 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:10:06.857 | 70.00th=[ 141], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 167], 00:10:06.857 | 99.00th=[ 186], 99.50th=[ 196], 99.90th=[ 208], 99.95th=[ 210], 00:10:06.857 | 99.99th=[ 229] 00:10:06.857 bw ( KiB/s): min=24672, max=30168, per=22.77%, avg=27657.60, stdev=2028.89, samples=5 00:10:06.857 iops : min= 6168, max= 7542, avg=6914.40, stdev=507.22, samples=5 00:10:06.857 lat (usec) : 100=12.58%, 250=87.41%, 500=0.01% 00:10:06.857 cpu : usr=3.13%, sys=10.30%, ctx=18408, majf=0, minf=2 00:10:06.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.857 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.857 issued rwts: total=18408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.857 00:10:06.857 Run status group 0 (all jobs): 00:10:06.857 READ: bw=119MiB/s (124MB/s), 27.0MiB/s-39.6MiB/s (28.3MB/s-41.5MB/s), io=387MiB (406MB), run=2623-3265msec 00:10:06.857 00:10:06.857 Disk stats (read/write): 00:10:06.857 nvme0n1: ios=29077/0, merge=0/0, ticks=2266/0, in_queue=2266, util=93.42% 00:10:06.857 nvme0n2: ios=28032/0, merge=0/0, ticks=2413/0, in_queue=2413, util=92.87% 00:10:06.857 nvme0n3: ios=19625/0, 
merge=0/0, ticks=2435/0, in_queue=2435, util=95.10% 00:10:06.857 nvme0n4: ios=18091/0, merge=0/0, ticks=2245/0, in_queue=2245, util=96.45% 00:10:07.117 21:39:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.117 21:39:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:07.376 21:39:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.376 21:39:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:07.635 21:39:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.635 21:39:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:07.635 21:39:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.635 21:39:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:07.894 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:07.894 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2926511 00:10:07.894 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:07.894 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.831 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.831 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:08.831 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:08.831 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.831 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:08.831 21:39:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.831 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:08.831 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:08.831 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:08.831 nvmf hotplug test: fio failed as expected 00:10:08.831 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:09.089 21:39:41 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:09.089 rmmod nvme_rdma 00:10:09.089 rmmod nvme_fabrics 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 2923438 ']' 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 2923438 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2923438 ']' 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2923438 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.089 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2923438 00:10:09.348 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.348 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.348 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2923438' 00:10:09.348 killing process with pid 2923438 00:10:09.348 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2923438 00:10:09.348 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2923438 00:10:09.607 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:09.607 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:10:09.607 00:10:09.607 real 0m26.627s 00:10:09.607 user 2m7.970s 00:10:09.607 sys 0m10.301s 00:10:09.607 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.607 21:39:41 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.607 ************************************ 00:10:09.607 END TEST nvmf_fio_target 00:10:09.607 ************************************ 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.608 ************************************ 00:10:09.608 START TEST nvmf_bdevio 00:10:09.608 ************************************ 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:09.608 * Looking for test storage... 00:10:09.608 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.608 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.867 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:09.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.868 --rc genhtml_branch_coverage=1 00:10:09.868 --rc genhtml_function_coverage=1 00:10:09.868 --rc genhtml_legend=1 00:10:09.868 --rc geninfo_all_blocks=1 00:10:09.868 --rc geninfo_unexecuted_blocks=1 00:10:09.868 00:10:09.868 ' 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:09.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.868 --rc genhtml_branch_coverage=1 00:10:09.868 --rc genhtml_function_coverage=1 00:10:09.868 --rc genhtml_legend=1 00:10:09.868 --rc geninfo_all_blocks=1 00:10:09.868 --rc geninfo_unexecuted_blocks=1 00:10:09.868 00:10:09.868 ' 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:09.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.868 --rc genhtml_branch_coverage=1 00:10:09.868 --rc genhtml_function_coverage=1 00:10:09.868 --rc genhtml_legend=1 00:10:09.868 --rc geninfo_all_blocks=1 00:10:09.868 --rc geninfo_unexecuted_blocks=1 00:10:09.868 00:10:09.868 ' 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:09.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.868 --rc genhtml_branch_coverage=1 00:10:09.868 --rc genhtml_function_coverage=1 00:10:09.868 --rc genhtml_legend=1 00:10:09.868 --rc geninfo_all_blocks=1 00:10:09.868 --rc geninfo_unexecuted_blocks=1 00:10:09.868 00:10:09.868 ' 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:09.868 21:39:41 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.868 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.869 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.869 21:39:41 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:16.449 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:16.449 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:16.450 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:16.450 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:16.450 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # rdma_device_init 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:16.450 
21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@526 -- # allocate_nic_ips 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:16.450 6: mlx_0_0: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:10:16.450 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:16.450 altname enp217s0f0np0 00:10:16.450 altname ens818f0np0 00:10:16.450 inet 192.168.100.8/24 scope global mlx_0_0 00:10:16.450 valid_lft forever preferred_lft forever 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.450 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:16.451 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:16.451 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:16.451 altname enp217s0f1np1 00:10:16.451 altname ens818f1np1 00:10:16.451 inet 192.168.100.9/24 scope global mlx_0_1 00:10:16.451 valid_lft forever preferred_lft forever 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.451 21:39:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:10:16.451 192.168.100.9' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:10:16.451 192.168.100.9' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # head -n 1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:10:16.451 192.168.100.9' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # tail -n +2 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # head -n 1 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:10:16.451 21:39:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=2930963 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 2930963 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2930963 ']' 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.451 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:16.451 [2024-11-29 21:39:48.323509] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:16.451 [2024-11-29 21:39:48.323568] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.451 [2024-11-29 21:39:48.395785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.451 [2024-11-29 21:39:48.435540] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.451 [2024-11-29 21:39:48.435581] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.451 [2024-11-29 21:39:48.435590] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.451 [2024-11-29 21:39:48.435598] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.451 [2024-11-29 21:39:48.435605] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
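For orientation: the nvmfappstart -m 0x78 / waitforlisten steps traced above amount to launching the SPDK NVMe-oF target with a core mask and polling its RPC socket until it answers. A minimal bash sketch of that pattern follows; the binary path and its -i/-e/-m arguments are copied from the log, while the socket path and the polling loop with its ~10 s timeout are illustrative assumptions, not the suite's exact helpers.

    #!/usr/bin/env bash
    # Sketch of the nvmfappstart/waitforlisten pattern seen above (assumptions noted).
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk.sock   # default RPC socket location (assumed)

    # Launch the target: shm id 0, all tracepoint groups enabled, core mask 0x78.
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!

    # Poll the RPC socket until the app responds (up to ~10 s, assumed timeout).
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done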
00:10:16.451 [2024-11-29 21:39:48.435726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.451 [2024-11-29 21:39:48.435840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.451 [2024-11-29 21:39:48.435948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.451 [2024-11-29 21:39:48.435949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.452 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.452 [2024-11-29 21:39:48.612265] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12f0850/0x12f4d00) succeed. 00:10:16.452 [2024-11-29 21:39:48.622900] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12f1e40/0x13363a0) succeed. 
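nvmfappstart above boots the target with `-m 0x78`, a core mask selecting cores 3-6, which is exactly the set the four reactor notices report; waitforlisten then blocks until the app owns /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern (binary and socket paths from the log; the polling loop is illustrative, not common.sh's exact implementation):

NVMF_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
rpc_sock=/var/tmp/spdk.sock

"$NVMF_TGT" -i 0 -e 0xFFFF -m 0x78 &    # 0x78 = 0b1111000 -> cores 3,4,5,6
nvmfpid=$!

# Wait for the UNIX-domain RPC socket to appear, bailing out if the app died.
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [ -S "$rpc_sock" ] && break
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) listening on $rpc_sock"
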
00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.711 Malloc0 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.711 [2024-11-29 21:39:48.781280] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:16.711 { 00:10:16.711 "params": { 00:10:16.711 "name": "Nvme$subsystem", 00:10:16.711 "trtype": "$TEST_TRANSPORT", 00:10:16.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.711 "adrfam": "ipv4", 00:10:16.711 "trsvcid": "$NVMF_PORT", 00:10:16.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.711 "hdgst": ${hdgst:-false}, 00:10:16.711 "ddgst": ${ddgst:-false} 00:10:16.711 }, 00:10:16.711 "method": "bdev_nvme_attach_controller" 00:10:16.711 } 00:10:16.711 EOF 00:10:16.711 )") 00:10:16.711 21:39:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:16.711 21:39:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:16.711 "params": { 00:10:16.711 "name": "Nvme1", 00:10:16.711 "trtype": "rdma", 00:10:16.711 "traddr": "192.168.100.8", 00:10:16.711 "adrfam": "ipv4", 00:10:16.711 "trsvcid": "4420", 00:10:16.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.711 "hdgst": false, 00:10:16.711 "ddgst": false 00:10:16.711 }, 00:10:16.711 "method": "bdev_nvme_attach_controller" 00:10:16.711 }' 00:10:16.711 [2024-11-29 21:39:48.828116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:16.711 [2024-11-29 21:39:48.828166] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931213 ] 00:10:16.711 [2024-11-29 21:39:48.899003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.711 [2024-11-29 21:39:48.940070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.711 [2024-11-29 21:39:48.940166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.711 [2024-11-29 21:39:48.940169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.970 I/O targets: 00:10:16.970 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:16.970 00:10:16.970 00:10:16.970 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.970 http://cunit.sourceforge.net/ 00:10:16.970 00:10:16.970 00:10:16.970 Suite: bdevio tests on: Nvme1n1 00:10:16.970 Test: blockdev write read block ...passed 00:10:16.970 Test: blockdev write zeroes read block ...passed 00:10:16.970 Test: blockdev write zeroes read no split ...passed 00:10:16.970 Test: blockdev write zeroes read split ...passed 00:10:16.970 Test: blockdev write zeroes read split partial ...passed 00:10:16.970 Test: blockdev reset ...[2024-11-29 21:39:49.140315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:16.970 [2024-11-29 21:39:49.163162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:16.971 [2024-11-29 21:39:49.190077] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
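Everything bdevio needs was just provisioned over RPC: an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace, and a listener on 192.168.100.8:4420; the heredoc then expands into the attach-controller JSON printed above and is handed to bdevio on /dev/fd/62. The same run written out as a plain script; the rpc.py path is assumed from the workspace layout, a here-document stands in for the /dev/fd/62 process substitution, and the outer subsystems/config wrapper is an assumption (the trace prints only the inner attach-controller object):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Initiator-side config for bdevio; "params" values are verbatim from the trace.
"$SPDK/test/bdev/bdevio/bdevio" --json /dev/stdin <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false, "ddgst": false
  }}]}]}
JSON
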
00:10:16.971 passed 00:10:16.971 Test: blockdev write read 8 blocks ...passed 00:10:16.971 Test: blockdev write read size > 128k ...passed 00:10:16.971 Test: blockdev write read invalid size ...passed 00:10:16.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:16.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:16.971 Test: blockdev write read max offset ...passed 00:10:16.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.971 Test: blockdev writev readv 8 blocks ...passed 00:10:16.971 Test: blockdev writev readv 30 x 1block ...passed 00:10:16.971 Test: blockdev writev readv block ...passed 00:10:16.971 Test: blockdev writev readv size > 128k ...passed 00:10:16.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:16.971 Test: blockdev comparev and writev ...[2024-11-29 21:39:49.192984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.971 [2024-11-29 21:39:49.193014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.193026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.971 [2024-11-29 21:39:49.193036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.193194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.971 [2024-11-29 21:39:49.193206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.193216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.971 [2024-11-29 21:39:49.193226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.193412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.971 [2024-11-29 21:39:49.193423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.193433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.971 [2024-11-29 21:39:49.193442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.193607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.971 [2024-11-29 21:39:49.193618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.193628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.971 [2024-11-29 21:39:49.193638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:16.971 passed 00:10:16.971 Test: blockdev nvme passthru rw ...passed 00:10:16.971 Test: blockdev nvme passthru vendor specific ...[2024-11-29 21:39:49.193907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:16.971 [2024-11-29 21:39:49.193921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.193958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:16.971 [2024-11-29 21:39:49.193969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.194007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:16.971 [2024-11-29 21:39:49.194017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:16.971 [2024-11-29 21:39:49.194058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:16.971 [2024-11-29 21:39:49.194069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:16.971 passed 00:10:16.971 Test: blockdev nvme admin passthru ...passed 00:10:16.971 Test: blockdev copy ...passed 00:10:16.971 00:10:16.971 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.971 suites 1 1 n/a 0 0 00:10:16.971 tests 23 23 23 0 0 00:10:16.971 asserts 152 152 152 0 n/a 00:10:16.971 00:10:16.971 Elapsed time = 0.172 seconds 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:17.230 rmmod nvme_rdma 00:10:17.230 rmmod nvme_fabrics 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.230 21:39:49 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 2930963 ']' 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 2930963 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2930963 ']' 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2930963 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.230 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2930963 00:10:17.489 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:17.489 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:17.490 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2930963' 00:10:17.490 killing process with pid 2930963 00:10:17.490 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2930963 00:10:17.490 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2930963 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:10:17.750 00:10:17.750 real 0m8.090s 00:10:17.750 user 0m7.842s 00:10:17.750 sys 0m5.477s 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.750 ************************************ 00:10:17.750 END TEST nvmf_bdevio 00:10:17.750 ************************************ 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:17.750 00:10:17.750 real 4m7.573s 00:10:17.750 user 10m46.425s 00:10:17.750 sys 1m35.434s 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.750 ************************************ 00:10:17.750 END TEST nvmf_target_core 00:10:17.750 ************************************ 00:10:17.750 21:39:49 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:17.750 21:39:49 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:17.750 21:39:49 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.750 21:39:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:17.750 ************************************ 00:10:17.750 START TEST nvmf_target_extra 00:10:17.750 ************************************ 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:17.750 * Looking for test storage... 00:10:17.750 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:17.750 21:39:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:18.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.009 --rc genhtml_branch_coverage=1 00:10:18.009 --rc genhtml_function_coverage=1 00:10:18.009 --rc genhtml_legend=1 00:10:18.009 --rc geninfo_all_blocks=1 00:10:18.009 --rc geninfo_unexecuted_blocks=1 00:10:18.009 00:10:18.009 ' 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:18.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.009 --rc genhtml_branch_coverage=1 00:10:18.009 --rc genhtml_function_coverage=1 00:10:18.009 --rc genhtml_legend=1 00:10:18.009 --rc geninfo_all_blocks=1 00:10:18.009 --rc geninfo_unexecuted_blocks=1 00:10:18.009 00:10:18.009 ' 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:18.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.009 --rc genhtml_branch_coverage=1 00:10:18.009 --rc genhtml_function_coverage=1 00:10:18.009 --rc genhtml_legend=1 00:10:18.009 --rc geninfo_all_blocks=1 00:10:18.009 --rc geninfo_unexecuted_blocks=1 00:10:18.009 00:10:18.009 ' 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:18.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.009 --rc genhtml_branch_coverage=1 00:10:18.009 --rc genhtml_function_coverage=1 00:10:18.009 --rc genhtml_legend=1 00:10:18.009 --rc geninfo_all_blocks=1 00:10:18.009 --rc geninfo_unexecuted_blocks=1 00:10:18.009 00:10:18.009 ' 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.009 21:39:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.010 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:18.010 ************************************ 00:10:18.010 START TEST nvmf_example 00:10:18.010 ************************************ 00:10:18.010 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:18.268 * Looking for test storage... 
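The `[: : integer expression expected` complaint above (common.sh line 33, repeated every time the file is sourced) is bash's `[` being handed an empty operand in a numeric test: with the flag variable unset, the command expands to `'[' '' -eq 1 ']'`, which is malformed. A small reproduction plus the usual guard; SPDK_TEST_FLAG is a stand-in name, and this is a sketch of the pattern, not the harness's actual fix:

# Reproduce: an unset flag makes the numeric test malformed.
unset SPDK_TEST_FLAG
[ "$SPDK_TEST_FLAG" -eq 1 ] && echo enabled    # prints the same [: error

# Guard: default the expansion so the operand is always an integer.
[ "${SPDK_TEST_FLAG:-0}" -eq 1 ] && echo enabled || echo disabled
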
00:10:18.268 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.268 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:18.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.269 --rc genhtml_branch_coverage=1 00:10:18.269 --rc genhtml_function_coverage=1 00:10:18.269 --rc genhtml_legend=1 00:10:18.269 --rc geninfo_all_blocks=1 00:10:18.269 --rc geninfo_unexecuted_blocks=1 00:10:18.269 00:10:18.269 ' 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:18.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.269 --rc genhtml_branch_coverage=1 00:10:18.269 --rc genhtml_function_coverage=1 00:10:18.269 --rc genhtml_legend=1 00:10:18.269 --rc geninfo_all_blocks=1 00:10:18.269 --rc geninfo_unexecuted_blocks=1 00:10:18.269 00:10:18.269 ' 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:18.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.269 --rc genhtml_branch_coverage=1 00:10:18.269 --rc genhtml_function_coverage=1 00:10:18.269 --rc genhtml_legend=1 00:10:18.269 --rc geninfo_all_blocks=1 00:10:18.269 --rc geninfo_unexecuted_blocks=1 00:10:18.269 00:10:18.269 ' 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:18.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.269 --rc genhtml_branch_coverage=1 00:10:18.269 --rc genhtml_function_coverage=1 00:10:18.269 --rc genhtml_legend=1 00:10:18.269 --rc geninfo_all_blocks=1 00:10:18.269 --rc geninfo_unexecuted_blocks=1 00:10:18.269 00:10:18.269 ' 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
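The scripts/common.sh trace above, replayed once per sub-test, is the lcov version gate: `lt 1.15 2` tokenizes both versions on dots and dashes into the ver1/ver2 arrays, iterates up to the longer length (the `(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))` test), and compares field by field, so lcov 1.15 tests as older than 2 and the extra --rc coverage options stay enabled. A trimmed sketch of that comparison logic:

cmp_versions() {
    local IFS=.- op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    read -ra ver2 <<< "$3"    # "2"    -> (2)
    local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        ((a > b)) && { [ "$op" = '>' ]; return; }
        ((a < b)) && { [ "$op" = '<' ]; return; }
    done
    [ "$op" = '=' ]   # all fields equal: only '=' is satisfied
}

lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov 1.15 < 2"
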
00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.269 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.270 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
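Each sub-test re-sources nvmf/common.sh, and the identical block above shows the host identity being rebuilt every time: `nvme gen-hostnqn` yields an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, the uuid tail doubles as NVME_HOSTID, and both land in the NVME_HOST argument array used by later `nvme connect` calls. One way to write that derivation; the suffix-strip is illustrative, since the trace only shows the resulting values:

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the uuid after the last ':'
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

# Later connects pass both flags, e.g. (per the NVME_CONNECT='nvme connect -i 15'
# setting seen further down):
# nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420 -n <subnqn>
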
00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.270 21:39:50 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
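gather_supported_nvmf_pci_devs above builds per-family ID lists (intel=0x8086 for e810/x722, mellanox=0x15b3 for the ConnectX variants) and then, in the "Found 0000:d9:00.0 (0x15b3 - 0x1015)" lines that follow, matches the machine's PCI bus against them and resolves each hit to its kernel net device through sysfs. A standalone sketch of that scan, with the ID table cut down to the one vendor/device pair this rig reports and the sysfs walk standing in for the harness's pci_bus_cache lookup:

mellanox=0x15b3
connectx4lx=0x1015   # the 0x15b3:0x1015 pair reported in the Found lines

for pci in /sys/bus/pci/devices/*; do
    [ "$(cat "$pci/vendor")" = "$mellanox" ] || continue
    [ "$(cat "$pci/device")" = "$connectx4lx" ] || continue
    echo "Found ${pci##*/} ($mellanox - $connectx4lx)"
    # Resolve the PCI function to its net device(s), if a driver is bound.
    for net in "$pci"/net/*; do
        [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done
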
00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.836 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:24.837 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:24.837 21:39:56 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:24.837 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:24.837 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:24.837 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # rdma_device_init 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@526 -- # allocate_nic_ips 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:10:24.837 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:24.837 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:10:24.837 altname enp217s0f0np0
00:10:24.837 altname ens818f0np0
00:10:24.837 inet 192.168.100.8/24 scope global mlx_0_0
00:10:24.837 valid_lft forever preferred_lft forever
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}'
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:10:24.837 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:10:24.837 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:10:24.837 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:10:24.838 altname enp217s0f1np1
00:10:24.838 altname ens818f1np1
00:10:24.838 inet 192.168.100.9/24 scope global mlx_0_1
00:10:24.838 valid_lft forever preferred_lft forever
00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0
00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]]
00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # get_available_rdma_ips
00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list
00:10:24.838 21:39:56
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:10:24.838 192.168.100.9' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@481 -- # echo '192.168.100.8 00:10:24.838 192.168.100.9' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # head -n 1 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:10:24.838 192.168.100.9' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # head -n 1 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # tail -n +2 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2934715 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2934715 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2934715 ']' 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
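The xtrace above shows how the harness arrives at NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9: each RDMA-backed net device comes from get_rdma_if_list, and its IPv4 address is parsed out of a single line of `ip -o -4 addr show`. A minimal standalone sketch of the same pattern follows; the interface names and socket path are taken from this run, and the polling loop is only an illustrative stand-in for what waitforlisten does, not its actual implementation.

  #!/usr/bin/env bash
  # Parse the IPv4 address of an interface the way nvmf/common.sh@117 does:
  # `ip -o -4` prints one line per address, and field 4 is "ADDR/PREFIX".
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  for nic in mlx_0_0 mlx_0_1; do          # interface names from this log; yours will differ
      ip_addr=$(get_ip_address "$nic")
      echo "$nic -> ${ip_addr:-<no IPv4 address>}"
  done

  # Illustrative wait for the target's RPC socket (default /var/tmp/spdk.sock):
  # poll until the UNIX-domain socket exists before issuing any RPCs.
  for _ in {1..100}; do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.1
  done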
00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.838 21:39:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.805 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.805 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:25.805 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:25.805 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.806 21:39:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:25.806 21:39:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]]
00:10:25.806 21:39:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:25.806 21:39:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:38.025 Initializing NVMe Controllers
00:10:38.025 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:10:38.025 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:38.025 Initialization complete. Launching workers.
00:10:38.025 ========================================================
00:10:38.025                                                                              Latency(us)
00:10:38.025 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:38.025 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   26585.60     103.85    2407.79     619.12   13006.77
00:10:38.025 ========================================================
00:10:38.025 Total                                                                  :   26585.60     103.85    2407.79     619.12   13006.77
00:10:38.025
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:10:38.025 rmmod nvme_rdma
00:10:38.025 rmmod nvme_fabrics
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 2934715 ']'
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 2934715
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2934715 ']'
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2934715
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2934715
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2934715'
00:10:38.025 killing process with pid 2934715
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2934715
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2934715
00:10:38.025 nvmf threads initialize successfully
00:10:38.025 bdev subsystem init successfully
00:10:38.025 created a nvmf target service
00:10:38.025 create targets's poll groups done
00:10:38.025 all subsystems of target started
00:10:38.025 nvmf target is running
00:10:38.025 all subsystems of target stopped
00:10:38.025 destroy targets's poll groups done
00:10:38.025 destroyed the nvmf target service
00:10:38.025 bdev subsystem finish successfully
00:10:38.025 nvmf threads destroy successfully
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:38.025
00:10:38.025 real 0m19.482s
00:10:38.025 user 0m52.270s
00:10:38.025 sys 0m5.502s
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:38.025 ************************************
00:10:38.025 END TEST nvmf_example
00:10:38.025 ************************************
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:38.025 ************************************
00:10:38.025 START TEST nvmf_filesystem
00:10:38.025 ************************************
00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:10:38.025 * Looking for test storage...
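Before the END TEST banner above, the nvmf_example test brought the target up with four RPCs and then drove I/O against it with spdk_nvme_perf. A rough manual equivalent can be issued with SPDK's scripts/rpc.py once an nvmf target is listening on the default /var/tmp/spdk.sock; every value below (transport options, NQN, serial number, listener address, perf flags) is copied from the trace, so this is a sketch of the traced commands rather than a general recipe.

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # Target bring-up, mirroring target/nvmf_example.sh@45-57 in the trace:
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport
  $rpc bdev_malloc_create 64 512                 # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # exposed as NSID 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # I/O phase, exactly as traced at target/nvmf_example.sh@61:
  # queue depth 64, 4 KiB blocks, random mixed I/O with 30% reads, for 10 seconds.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The results table above (26585.60 IOPS, 103.85 MiB/s, 2407.79 us average latency at queue depth 64) is what that workload produced against the Malloc0 namespace on this host's mlx5 link.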
00:10:38.025 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.025 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.026 --rc genhtml_branch_coverage=1 00:10:38.026 --rc genhtml_function_coverage=1 00:10:38.026 --rc genhtml_legend=1 00:10:38.026 --rc geninfo_all_blocks=1 00:10:38.026 --rc geninfo_unexecuted_blocks=1 00:10:38.026 00:10:38.026 ' 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.026 --rc genhtml_branch_coverage=1 00:10:38.026 --rc genhtml_function_coverage=1 00:10:38.026 --rc genhtml_legend=1 00:10:38.026 --rc geninfo_all_blocks=1 00:10:38.026 --rc geninfo_unexecuted_blocks=1 00:10:38.026 00:10:38.026 ' 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.026 --rc genhtml_branch_coverage=1 00:10:38.026 --rc genhtml_function_coverage=1 00:10:38.026 --rc genhtml_legend=1 00:10:38.026 --rc geninfo_all_blocks=1 00:10:38.026 --rc geninfo_unexecuted_blocks=1 00:10:38.026 00:10:38.026 ' 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.026 --rc genhtml_branch_coverage=1 00:10:38.026 --rc genhtml_function_coverage=1 00:10:38.026 --rc genhtml_legend=1 00:10:38.026 --rc geninfo_all_blocks=1 00:10:38.026 --rc geninfo_unexecuted_blocks=1 00:10:38.026 00:10:38.026 ' 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:10:38.026 21:40:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:38.026 
21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 
00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:38.026 21:40:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:38.026 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:38.027 #define SPDK_CONFIG_H 00:10:38.027 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:38.027 #define SPDK_CONFIG_APPS 1 00:10:38.027 #define SPDK_CONFIG_ARCH native 00:10:38.027 #undef SPDK_CONFIG_ASAN 00:10:38.027 #undef SPDK_CONFIG_AVAHI 00:10:38.027 #undef SPDK_CONFIG_CET 00:10:38.027 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:38.027 #define SPDK_CONFIG_COVERAGE 1 00:10:38.027 #define SPDK_CONFIG_CROSS_PREFIX 00:10:38.027 #undef SPDK_CONFIG_CRYPTO 00:10:38.027 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:38.027 #undef SPDK_CONFIG_CUSTOMOCF 00:10:38.027 #undef SPDK_CONFIG_DAOS 00:10:38.027 #define SPDK_CONFIG_DAOS_DIR 00:10:38.027 #define SPDK_CONFIG_DEBUG 1 00:10:38.027 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:38.027 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:38.027 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:10:38.027 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:38.027 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:38.027 #undef SPDK_CONFIG_DPDK_UADK 00:10:38.027 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:38.027 #define SPDK_CONFIG_EXAMPLES 1 00:10:38.027 #undef SPDK_CONFIG_FC 00:10:38.027 #define SPDK_CONFIG_FC_PATH 00:10:38.027 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:38.027 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:38.027 #define SPDK_CONFIG_FSDEV 1 00:10:38.027 #undef SPDK_CONFIG_FUSE 00:10:38.027 #undef SPDK_CONFIG_FUZZER 00:10:38.027 #define SPDK_CONFIG_FUZZER_LIB 00:10:38.027 #undef SPDK_CONFIG_GOLANG 00:10:38.027 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:38.027 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:38.027 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:38.027 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:38.027 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:38.027 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:38.027 #undef SPDK_CONFIG_HAVE_LZ4 00:10:38.027 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:38.027 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:38.027 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:38.027 #define SPDK_CONFIG_IDXD 1 00:10:38.027 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:38.027 #undef SPDK_CONFIG_IPSEC_MB 00:10:38.027 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:38.027 #define SPDK_CONFIG_ISAL 1 00:10:38.027 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:38.027 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:38.027 #define SPDK_CONFIG_LIBDIR 00:10:38.027 #undef SPDK_CONFIG_LTO 00:10:38.027 #define SPDK_CONFIG_MAX_LCORES 128 00:10:38.027 #define SPDK_CONFIG_NVME_CUSE 1 00:10:38.027 #undef SPDK_CONFIG_OCF 00:10:38.027 #define SPDK_CONFIG_OCF_PATH 00:10:38.027 #define SPDK_CONFIG_OPENSSL_PATH 00:10:38.027 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:38.027 #define SPDK_CONFIG_PGO_DIR 00:10:38.027 #undef SPDK_CONFIG_PGO_USE 00:10:38.027 #define SPDK_CONFIG_PREFIX /usr/local 00:10:38.027 #undef SPDK_CONFIG_RAID5F 00:10:38.027 #undef SPDK_CONFIG_RBD 00:10:38.027 #define SPDK_CONFIG_RDMA 1 00:10:38.027 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:38.027 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:38.027 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:38.027 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:38.027 #define SPDK_CONFIG_SHARED 1 00:10:38.027 #undef 
SPDK_CONFIG_SMA 00:10:38.027 #define SPDK_CONFIG_TESTS 1 00:10:38.027 #undef SPDK_CONFIG_TSAN 00:10:38.027 #define SPDK_CONFIG_UBLK 1 00:10:38.027 #define SPDK_CONFIG_UBSAN 1 00:10:38.027 #undef SPDK_CONFIG_UNIT_TESTS 00:10:38.027 #undef SPDK_CONFIG_URING 00:10:38.027 #define SPDK_CONFIG_URING_PATH 00:10:38.027 #undef SPDK_CONFIG_URING_ZNS 00:10:38.027 #undef SPDK_CONFIG_USDT 00:10:38.027 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:38.027 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:38.027 #undef SPDK_CONFIG_VFIO_USER 00:10:38.027 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:38.027 #define SPDK_CONFIG_VHOST 1 00:10:38.027 #define SPDK_CONFIG_VIRTIO 1 00:10:38.027 #undef SPDK_CONFIG_VTUNE 00:10:38.027 #define SPDK_CONFIG_VTUNE_DIR 00:10:38.027 #define SPDK_CONFIG_WERROR 1 00:10:38.027 #define SPDK_CONFIG_WPDK_DIR 00:10:38.027 #undef SPDK_CONFIG_XNVME 00:10:38.027 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:38.027 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:38.028 21:40:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:38.028 21:40:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:38.028 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:38.029 21:40:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:38.029 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:38.030 21:40:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2936944 ]] 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 
2936944 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.raAf3U 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.raAf3U/tests/target /tmp/spdk.raAf3U 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 
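The entries above show set_test_storage starting to parse `df -T` into per-mount associative arrays (mounts, fss, sizes, avails, uses); the walk continues through the remaining tmpfs mounts below. A condensed sketch of the parsing pattern, assuming GNU df's 1K-block columns (the traced values are bytes, so the sketch scales by 1024; illustrative, not autotest_common.sh verbatim):

  declare -A mounts fss sizes avails uses
  # df -T columns: source, fstype, 1K-blocks, used, available, use%, mountpoint
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      avails["$mount"]=$((avail * 1024))   # keep everything in bytes
      sizes["$mount"]=$((size * 1024))
      uses["$mount"]=$((use * 1024))
  done < <(df -T | grep -v Filesystem)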
00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=53691568128 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730590720 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8039022592 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30803623936 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865293312 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=61669376 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12323024896 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346118144 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23093248 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30864273408 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865297408 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1024000 00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use 
avail _ mount
00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6173044736
00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173057024
00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288
00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:10:38.030 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n'
00:10:38.031 * Looking for test storage...
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}"
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}'
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=53691568128
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size ))
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size ))
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]]
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]]
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]]
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10253615104
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 ))
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:10:38.031 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0
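With the arrays filled, the acceptance test traced above is pure arithmetic: a candidate's mount must hold the requested bytes, and the projected usage (bytes already used plus the request) must stay at or below 95% of the filesystem. A sketch plugging in the traced values for / , with $target_dir standing for the candidate directory (requested_size is the 2 GiB request plus 64 MiB of slack, as set earlier in the trace):

  requested_size=2214592512                      # 2147483648 + 67108864
  mount=/                                        # df + awk on the candidate dir
  target_space=${avails["$mount"]}               # 53691568128 here
  if (( target_space >= requested_size )); then
      # projected usage = used bytes + requested bytes
      new_size=$((requested_size + sizes["$mount"] - target_space))   # 10253615104
      if (( new_size * 100 / sizes["$mount"] <= 95 )); then           # ~16% on this host
          echo "* Found test storage at $target_dir"
      fi
  fi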
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 
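After restoring the xtrace bookkeeping, the run above checks the installed lcov: `lt 1.15 2` asks scripts/common.sh's cmp_versions whether lcov 1.15 predates 2.x, and the field-by-field comparison completes in the entries that follow. A condensed sketch of that comparison, assuming bash 4+ arrays (not the script verbatim):

  # Split both versions on '.', '-' or ':' and compare field by field as integers.
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }   # missing fields read as 0
          ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]   # all fields equal: only <=, >= and == succeed
  }
  cmp_versions 1.15 '<' 2 && echo "lcov is pre-2.x"   # true: 1 < 2 in the first field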
00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.031 --rc genhtml_branch_coverage=1 00:10:38.031 --rc genhtml_function_coverage=1 00:10:38.031 --rc genhtml_legend=1 00:10:38.031 --rc geninfo_all_blocks=1 00:10:38.031 --rc geninfo_unexecuted_blocks=1 00:10:38.031 00:10:38.031 ' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.031 --rc genhtml_branch_coverage=1 00:10:38.031 --rc genhtml_function_coverage=1 00:10:38.031 --rc genhtml_legend=1 00:10:38.031 --rc geninfo_all_blocks=1 00:10:38.031 --rc geninfo_unexecuted_blocks=1 00:10:38.031 00:10:38.031 ' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.031 --rc genhtml_branch_coverage=1 00:10:38.031 --rc genhtml_function_coverage=1 00:10:38.031 --rc genhtml_legend=1 00:10:38.031 --rc geninfo_all_blocks=1 00:10:38.031 --rc geninfo_unexecuted_blocks=1 00:10:38.031 00:10:38.031 ' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.031 --rc genhtml_branch_coverage=1 00:10:38.031 --rc genhtml_function_coverage=1 00:10:38.031 --rc genhtml_legend=1 
00:10:38.031 --rc geninfo_all_blocks=1 00:10:38.031 --rc geninfo_unexecuted_blocks=1 00:10:38.031 00:10:38.031 ' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.031 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.032 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.032 21:40:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:44.602 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:44.602 21:40:16 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:44.602 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:44.602 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.602 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:44.603 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # rdma_device_init 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@526 -- # allocate_nic_ips 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.603 21:40:16 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:44.603 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:44.863 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:44.863 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:44.863 altname enp217s0f0np0 00:10:44.863 altname ens818f0np0 00:10:44.863 inet 192.168.100.8/24 scope global mlx_0_0 00:10:44.863 valid_lft forever preferred_lft forever 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:44.863 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:44.863 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:44.863 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:44.864 altname enp217s0f1np1 00:10:44.864 altname ens818f1np1 00:10:44.864 inet 192.168.100.9/24 scope global mlx_0_1 00:10:44.864 valid_lft forever preferred_lft forever 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:44.864 21:40:16 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:10:44.864 192.168.100.9' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # head -n 1 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:10:44.864 192.168.100.9' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:10:44.864 192.168.100.9' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # tail -n +2 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # head -n 1 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.864 21:40:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.864 ************************************ 00:10:44.864 START TEST nvmf_filesystem_no_in_capsule 00:10:44.864 ************************************ 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.864 21:40:17 
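The get_ip_address/RDMA_IP_LIST plumbing above is ordinary ip/awk/cut work; a sketch assuming mlx_0_0 and mlx_0_1 already carry the addresses shown in the trace:

  # Mirror get_ip_address: first IPv4 address of an RDMA-capable interface
  get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST="$(get_ip mlx_0_0)
  $(get_ip mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma    # host-side driver needed for the later 'nvme connect'
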
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=2940362 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 2940362 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2940362 ']' 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.864 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.864 [2024-11-29 21:40:17.074740] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:44.864 [2024-11-29 21:40:17.074785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.124 [2024-11-29 21:40:17.145828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.124 [2024-11-29 21:40:17.185684] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.124 [2024-11-29 21:40:17.185726] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.124 [2024-11-29 21:40:17.185736] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.124 [2024-11-29 21:40:17.185744] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.124 [2024-11-29 21:40:17.185751] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
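nvmfappstart above boils down to launching the target binary with the flags visible in the EAL parameter dump; a sketch with the path and flags copied from the trace (backgrounding and pid capture are assumptions about the helper's internals):

  # Start the SPDK NVMe-oF target in the background and record its pid
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # -m 0xF    -> reactors on cores 0-3 (the four "Reactor started" notices that follow)
  # -e 0xFFFF -> tracepoint group mask reported by app_setup_trace
  # -i 0      -> shm id, which yields the spdk0 file-prefix seen in the EAL arguments
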
00:10:45.124 [2024-11-29 21:40:17.185794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.124 [2024-11-29 21:40:17.185893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.124 [2024-11-29 21:40:17.185986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.125 [2024-11-29 21:40:17.185988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.125 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.125 [2024-11-29 21:40:17.334260] rdma.c:2737:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:45.125 [2024-11-29 21:40:17.357903] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2421f50/0x2426400) succeed. 00:10:45.125 [2024-11-29 21:40:17.368703] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2423540/0x2467aa0) succeed. 
00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.385 Malloc1 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.385 [2024-11-29 21:40:17.608852] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:45.385 21:40:17 
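The rpc_cmd calls above are the complete target-side provisioning; spelled out with SPDK's rpc.py (rpc_cmd is a thin wrapper around it), with every argument taken from the trace:

  # Transport, backing bdev, subsystem, namespace, listener, in that order
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  rpc.py bdev_malloc_create 512 512 -b Malloc1      # 512 MiB bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
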
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.385 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.644 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.644 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:45.644 { 00:10:45.644 "name": "Malloc1", 00:10:45.644 "aliases": [ 00:10:45.644 "3ee961e8-ac72-4da3-9127-fa19491f05d0" 00:10:45.644 ], 00:10:45.644 "product_name": "Malloc disk", 00:10:45.644 "block_size": 512, 00:10:45.644 "num_blocks": 1048576, 00:10:45.644 "uuid": "3ee961e8-ac72-4da3-9127-fa19491f05d0", 00:10:45.644 "assigned_rate_limits": { 00:10:45.644 "rw_ios_per_sec": 0, 00:10:45.644 "rw_mbytes_per_sec": 0, 00:10:45.644 "r_mbytes_per_sec": 0, 00:10:45.644 "w_mbytes_per_sec": 0 00:10:45.644 }, 00:10:45.644 "claimed": true, 00:10:45.644 "claim_type": "exclusive_write", 00:10:45.644 "zoned": false, 00:10:45.644 "supported_io_types": { 00:10:45.644 "read": true, 00:10:45.644 "write": true, 00:10:45.644 "unmap": true, 00:10:45.644 "flush": true, 00:10:45.644 "reset": true, 00:10:45.644 "nvme_admin": false, 00:10:45.644 "nvme_io": false, 00:10:45.644 "nvme_io_md": false, 00:10:45.644 "write_zeroes": true, 00:10:45.644 "zcopy": true, 00:10:45.644 "get_zone_info": false, 00:10:45.644 "zone_management": false, 00:10:45.644 "zone_append": false, 00:10:45.644 "compare": false, 00:10:45.644 "compare_and_write": false, 00:10:45.644 "abort": true, 00:10:45.644 "seek_hole": false, 00:10:45.644 "seek_data": false, 00:10:45.644 "copy": true, 00:10:45.644 "nvme_iov_md": false 00:10:45.644 }, 00:10:45.644 "memory_domains": [ 00:10:45.644 { 00:10:45.644 "dma_device_id": "system", 00:10:45.644 "dma_device_type": 1 00:10:45.644 }, 00:10:45.644 { 00:10:45.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.644 "dma_device_type": 2 00:10:45.644 } 00:10:45.644 ], 00:10:45.644 "driver_specific": {} 00:10:45.645 } 00:10:45.645 ]' 00:10:45.645 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:45.645 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:45.645 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:45.645 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:45.645 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:45.645 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:45.645 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:10:45.645 21:40:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:46.582 21:40:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.582 21:40:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:46.582 21:40:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.582 21:40:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:46.582 21:40:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:48.574 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:48.575 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
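The size check and host attach above are plain jq and nvme-cli; a sketch with the NQN, host identity and address copied verbatim from the trace (the polling loop approximates waitforserial):

  # get_bdev_size: block_size * num_blocks (512 * 1048576 = 536870912)
  rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size, .[] .num_blocks'
  # Attach the initiator to the subsystem exported above
  nvme connect -i 15 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e \
      -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  # waitforserial: poll until lsblk shows a device with the subsystem serial
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
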
-- # (( nvme_size == malloc_size )) 00:10:48.575 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:48.575 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:48.834 21:40:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.771 ************************************ 00:10:49.771 START TEST filesystem_ext4 00:10:49.771 ************************************ 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:49.771 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:49.772 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:49.772 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:49.772 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:49.772 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:49.772 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:49.772 21:40:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:49.772 mke2fs 1.47.0 (5-Feb-2023) 00:10:50.032 Discarding device blocks: 0/522240 done 00:10:50.032 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:50.032 Filesystem UUID: a89e71ef-f83f-40b5-aa9a-1b41b6707853 00:10:50.032 Superblock backups stored on 
blocks: 00:10:50.032 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:50.032 00:10:50.032 Allocating group tables: 0/64 done 00:10:50.032 Writing inode tables: 0/64 done 00:10:50.032 Creating journal (8192 blocks): done 00:10:50.032 Writing superblocks and filesystem accounting information: 0/64 done 00:10:50.032 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2940362 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.032 00:10:50.032 real 0m0.203s 00:10:50.032 user 0m0.025s 00:10:50.032 sys 0m0.079s 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:50.032 ************************************ 00:10:50.032 END TEST filesystem_ext4 00:10:50.032 ************************************ 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
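Each filesystem_* subtest above runs the same partition/mkfs/mount/IO/unmount cycle; condensed for the ext4 case just finished, with device names and the mountpoint from the trace:

  # ext4 round trip over the exported NVMe-oF block device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1       # make_filesystem picks -F for ext4
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 2940362                   # the target (pid from the trace) must still be alive
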
-- common/autotest_common.sh@10 -- # set +x 00:10:50.032 ************************************ 00:10:50.032 START TEST filesystem_btrfs 00:10:50.032 ************************************ 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:50.032 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:50.292 btrfs-progs v6.8.1 00:10:50.292 See https://btrfs.readthedocs.io for more information. 00:10:50.292 00:10:50.292 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:50.292 NOTE: several default settings have changed in version 5.15, please make sure 00:10:50.292 this does not affect your deployments: 00:10:50.292 - DUP for metadata (-m dup) 00:10:50.292 - enabled no-holes (-O no-holes) 00:10:50.292 - enabled free-space-tree (-R free-space-tree) 00:10:50.292 00:10:50.292 Label: (null) 00:10:50.292 UUID: afa78778-a13e-47eb-9d68-32df4715882d 00:10:50.292 Node size: 16384 00:10:50.292 Sector size: 4096 (CPU page size: 4096) 00:10:50.292 Filesystem size: 510.00MiB 00:10:50.292 Block group profiles: 00:10:50.292 Data: single 8.00MiB 00:10:50.292 Metadata: DUP 32.00MiB 00:10:50.292 System: DUP 8.00MiB 00:10:50.292 SSD detected: yes 00:10:50.292 Zoned device: no 00:10:50.292 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:50.292 Checksum: crc32c 00:10:50.292 Number of devices: 1 00:10:50.292 Devices: 00:10:50.292 ID SIZE PATH 00:10:50.292 1 510.00MiB /dev/nvme0n1p1 00:10:50.292 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2940362 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.292 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.293 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.293 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.293 00:10:50.293 real 0m0.250s 00:10:50.293 user 0m0.028s 00:10:50.293 sys 0m0.132s 00:10:50.293 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.293 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:50.293 ************************************ 00:10:50.293 END TEST filesystem_btrfs 
00:10:50.293 ************************************ 00:10:50.552 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:50.552 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:50.552 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.553 ************************************ 00:10:50.553 START TEST filesystem_xfs 00:10:50.553 ************************************ 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:50.553 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:50.553 = sectsz=512 attr=2, projid32bit=1 00:10:50.553 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:50.553 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:50.553 data = bsize=4096 blocks=130560, imaxpct=25 00:10:50.553 = sunit=0 swidth=0 blks 00:10:50.553 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:50.553 log =internal log bsize=4096 blocks=16384, version=2 00:10:50.553 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:50.553 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:50.553 Discarding blocks...Done. 
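make_filesystem above switches the force flag on the filesystem type, since only mkfs.ext4 understands -F while btrfs and xfs take -f; a condensed sketch of the helper (the real one in common/autotest_common.sh also keeps a retry counter, visible as the i/force locals in the trace):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs."$fstype" "$force" "$dev_name"
  }
  make_filesystem xfs /dev/nvme0n1p1    # as in the filesystem_xfs subtest above
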
00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2940362 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.553 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.812 00:10:50.813 real 0m0.218s 00:10:50.813 user 0m0.026s 00:10:50.813 sys 0m0.084s 00:10:50.813 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.813 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:50.813 ************************************ 00:10:50.813 END TEST filesystem_xfs 00:10:50.813 ************************************ 00:10:50.813 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:50.813 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:50.813 21:40:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:51.750 21:40:23 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2940362 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2940362 ']' 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2940362 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2940362 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2940362' 00:10:51.750 killing process with pid 2940362 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2940362 00:10:51.750 21:40:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2940362 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:52.319 00:10:52.319 real 0m7.320s 00:10:52.319 user 0m28.538s 00:10:52.319 sys 0m1.192s 00:10:52.319 21:40:24 
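The teardown above mirrors the setup; condensed, with the pid and NQN taken from the trace:

  # Drop the partition, detach the host, delete the subsystem, stop the target
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  until ! lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2940362 && wait 2940362      # killprocess on nvmfpid
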
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.319 ************************************ 00:10:52.319 END TEST nvmf_filesystem_no_in_capsule 00:10:52.319 ************************************ 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.319 ************************************ 00:10:52.319 START TEST nvmf_filesystem_in_capsule 00:10:52.319 ************************************ 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=2941911 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 2941911 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2941911 ']' 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
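waitforlisten above blocks until the freshly started target (pid 2941911) answers on the RPC socket; the socket path and the 100-retry cap are from the trace, while the exact polling call is an assumption about the helper's internals:

  # Poll the RPC socket until nvmf_tgt is ready to serve RPCs
  i=0
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      (( ++i > 100 )) && { echo 'nvmf_tgt never came up' >&2; exit 1; }
      sleep 0.1
  done
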
00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.319 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.319 [2024-11-29 21:40:24.485727] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:52.319 [2024-11-29 21:40:24.485773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.319 [2024-11-29 21:40:24.555767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.579 [2024-11-29 21:40:24.596019] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.579 [2024-11-29 21:40:24.596060] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.579 [2024-11-29 21:40:24.596069] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.579 [2024-11-29 21:40:24.596077] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.579 [2024-11-29 21:40:24.596085] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.579 [2024-11-29 21:40:24.596129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.579 [2024-11-29 21:40:24.596227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.579 [2024-11-29 21:40:24.596317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.579 [2024-11-29 21:40:24.596319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.579 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.579 [2024-11-29 21:40:24.769652] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x1c30f50/0x1c35400) succeed. 00:10:52.579 [2024-11-29 21:40:24.779975] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c32540/0x1c76aa0) succeed. 00:10:52.839 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.839 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:52.839 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.839 21:40:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.839 Malloc1 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.839 [2024-11-29 21:40:25.043517] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1380 -- # local bs 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:52.839 { 00:10:52.839 "name": "Malloc1", 00:10:52.839 "aliases": [ 00:10:52.839 "be5edce2-5387-4941-81ec-03eb35b386e3" 00:10:52.839 ], 00:10:52.839 "product_name": "Malloc disk", 00:10:52.839 "block_size": 512, 00:10:52.839 "num_blocks": 1048576, 00:10:52.839 "uuid": "be5edce2-5387-4941-81ec-03eb35b386e3", 00:10:52.839 "assigned_rate_limits": { 00:10:52.839 "rw_ios_per_sec": 0, 00:10:52.839 "rw_mbytes_per_sec": 0, 00:10:52.839 "r_mbytes_per_sec": 0, 00:10:52.839 "w_mbytes_per_sec": 0 00:10:52.839 }, 00:10:52.839 "claimed": true, 00:10:52.839 "claim_type": "exclusive_write", 00:10:52.839 "zoned": false, 00:10:52.839 "supported_io_types": { 00:10:52.839 "read": true, 00:10:52.839 "write": true, 00:10:52.839 "unmap": true, 00:10:52.839 "flush": true, 00:10:52.839 "reset": true, 00:10:52.839 "nvme_admin": false, 00:10:52.839 "nvme_io": false, 00:10:52.839 "nvme_io_md": false, 00:10:52.839 "write_zeroes": true, 00:10:52.839 "zcopy": true, 00:10:52.839 "get_zone_info": false, 00:10:52.839 "zone_management": false, 00:10:52.839 "zone_append": false, 00:10:52.839 "compare": false, 00:10:52.839 "compare_and_write": false, 00:10:52.839 "abort": true, 00:10:52.839 "seek_hole": false, 00:10:52.839 "seek_data": false, 00:10:52.839 "copy": true, 00:10:52.839 "nvme_iov_md": false 00:10:52.839 }, 00:10:52.839 "memory_domains": [ 00:10:52.839 { 00:10:52.839 "dma_device_id": "system", 00:10:52.839 "dma_device_type": 1 00:10:52.839 }, 00:10:52.839 { 00:10:52.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.839 "dma_device_type": 2 00:10:52.839 } 00:10:52.839 ], 00:10:52.839 "driver_specific": {} 00:10:52.839 } 00:10:52.839 ]' 00:10:52.839 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:53.097 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:53.097 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:53.097 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:53.097 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:53.098 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:53.098 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@58 -- # malloc_size=536870912 00:10:53.098 21:40:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:54.033 21:40:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.033 21:40:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:54.033 21:40:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.033 21:40:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:54.033 21:40:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 
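Condensed, the setup captured above is a short sequence of SPDK RPCs plus one initiator command. A minimal sketch, assuming scripts/rpc.py is the client behind the rpc_cmd wrapper here, that the target app is already running, and with the --hostnqn/--hostid flags from the log omitted for brevity:

    # target side: RDMA transport with 4096-byte in-capsule data and a 512 MiB RAM-backed namespace
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1    # 1048576 blocks x 512 B = 536870912 B
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # initiator side: connect over RDMA, then the test compares device size against the bdev
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

This is why the test pulls .block_size and .num_blocks out of bdev_get_bdevs with jq: their product (512 x 1048576 = 536870912 bytes) must match the size the initiator kernel reports for the new nvme0n1 device before any filesystem work starts.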
00:10:55.939 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:56.199 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:56.199 21:40:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:57.137 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:57.137 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:57.137 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:57.137 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.137 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.137 ************************************ 00:10:57.137 START TEST filesystem_in_capsule_ext4 00:10:57.137 ************************************ 00:10:57.137 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:57.397 mke2fs 1.47.0 (5-Feb-2023) 00:10:57.397 Discarding device blocks: 0/522240 done 00:10:57.397 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:57.397 Filesystem UUID: 
34ae410a-114f-40f9-8839-266fba7931f5 00:10:57.397 Superblock backups stored on blocks: 00:10:57.397 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:57.397 00:10:57.397 Allocating group tables: 0/64 done 00:10:57.397 Writing inode tables: 0/64 done 00:10:57.397 Creating journal (8192 blocks): done 00:10:57.397 Writing superblocks and filesystem accounting information: 0/64 done 00:10:57.397 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2941911 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.397 00:10:57.397 real 0m0.207s 00:10:57.397 user 0m0.034s 00:10:57.397 sys 0m0.072s 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:57.397 ************************************ 00:10:57.397 END TEST filesystem_in_capsule_ext4 00:10:57.397 ************************************ 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:57.397 21:40:29 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.397 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.657 ************************************ 00:10:57.657 START TEST filesystem_in_capsule_btrfs 00:10:57.657 ************************************ 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:57.657 btrfs-progs v6.8.1 00:10:57.657 See https://btrfs.readthedocs.io for more information. 00:10:57.657 00:10:57.657 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:57.657 NOTE: several default settings have changed in version 5.15, please make sure 00:10:57.657 this does not affect your deployments: 00:10:57.657 - DUP for metadata (-m dup) 00:10:57.657 - enabled no-holes (-O no-holes) 00:10:57.657 - enabled free-space-tree (-R free-space-tree) 00:10:57.657 00:10:57.657 Label: (null) 00:10:57.657 UUID: 288e53d5-415b-426e-8e2d-f3d9af230ae5 00:10:57.657 Node size: 16384 00:10:57.657 Sector size: 4096 (CPU page size: 4096) 00:10:57.657 Filesystem size: 510.00MiB 00:10:57.657 Block group profiles: 00:10:57.657 Data: single 8.00MiB 00:10:57.657 Metadata: DUP 32.00MiB 00:10:57.657 System: DUP 8.00MiB 00:10:57.657 SSD detected: yes 00:10:57.657 Zoned device: no 00:10:57.657 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:57.657 Checksum: crc32c 00:10:57.657 Number of devices: 1 00:10:57.657 Devices: 00:10:57.657 ID SIZE PATH 00:10:57.657 1 510.00MiB /dev/nvme0n1p1 00:10:57.657 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2941911 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.657 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.916 00:10:57.916 real 0m0.237s 00:10:57.916 user 0m0.035s 00:10:57.916 sys 0m0.111s 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.916 ************************************ 00:10:57.916 END TEST filesystem_in_capsule_btrfs 00:10:57.916 ************************************ 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.916 ************************************ 00:10:57.916 START TEST filesystem_in_capsule_xfs 00:10:57.916 ************************************ 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:57.916 21:40:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:57.916 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:57.916 = sectsz=512 attr=2, projid32bit=1 00:10:57.916 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:57.917 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:57.917 data = bsize=4096 blocks=130560, imaxpct=25 00:10:57.917 = sunit=0 swidth=0 blks 00:10:57.917 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:57.917 log =internal log bsize=4096 blocks=16384, version=2 00:10:57.917 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:57.917 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:57.917 Discarding blocks...Done. 
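The three filesystem_in_capsule_* tests differ only in the mkfs invocation: make_filesystem selects -F for ext4 (mkfs.ext4 refuses to reformat an existing device without it) and -f for btrfs and xfs. After each mkfs, the same mount/write/unmount smoke test runs, and the xfs pass below repeats it once more. A condensed sketch of that per-filesystem check, with /dev/nvme0n1p1 and /mnt/device taken from the steps above and $nvmfpid standing in for the target PID (2941911 in this run):

    for fstype in ext4 btrfs xfs; do
        force=-f; [ "$fstype" = ext4 ] && force=-F
        mkfs.$fstype $force /dev/nvme0n1p1
        mount /dev/nvme0n1p1 /mnt/device
        touch /mnt/device/aaa                       # a write must survive sync over NVMe-oF
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
        kill -0 "$nvmfpid"                          # the nvmf target must still be alive
        lsblk -l -o NAME | grep -q -w nvme0n1p1     # and the partition still visible
    done

The real/user/sys triples printed before each END TEST marker appear to come from timing each run_test invocation; sub-second figures here are expected since the namespace is RAM-backed.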
00:10:57.917 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:57.917 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.917 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.917 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:57.917 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.917 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:57.917 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:57.917 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2941911 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:58.175 00:10:58.175 real 0m0.215s 00:10:58.175 user 0m0.039s 00:10:58.175 sys 0m0.072s 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:58.175 ************************************ 00:10:58.175 END TEST filesystem_in_capsule_xfs 00:10:58.175 ************************************ 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:58.175 21:40:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.110 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.110 21:40:31 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:59.110 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:59.110 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.110 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:59.110 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.110 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:59.110 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2941911 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2941911 ']' 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2941911 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2941911 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2941911' 00:10:59.111 killing process with pid 2941911 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2941911 00:10:59.111 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2941911 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:59.679 00:10:59.679 real 0m7.334s 
00:10:59.679 user 0m28.487s 00:10:59.679 sys 0m1.205s 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.679 ************************************ 00:10:59.679 END TEST nvmf_filesystem_in_capsule 00:10:59.679 ************************************ 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:59.679 rmmod nvme_rdma 00:10:59.679 rmmod nvme_fabrics 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:10:59.679 00:10:59.679 real 0m22.154s 00:10:59.679 user 0m59.258s 00:10:59.679 sys 0m7.860s 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.679 ************************************ 00:10:59.679 END TEST nvmf_filesystem 00:10:59.679 ************************************ 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.679 21:40:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.940 ************************************ 00:10:59.940 START TEST nvmf_target_discovery 00:10:59.940 ************************************ 00:10:59.940 21:40:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:59.940 * Looking for test storage... 
00:10:59.940 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.940 --rc genhtml_branch_coverage=1 00:10:59.940 --rc genhtml_function_coverage=1 00:10:59.940 --rc genhtml_legend=1 00:10:59.940 --rc geninfo_all_blocks=1 00:10:59.940 --rc geninfo_unexecuted_blocks=1 00:10:59.940 00:10:59.940 ' 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.940 --rc genhtml_branch_coverage=1 00:10:59.940 --rc genhtml_function_coverage=1 00:10:59.940 --rc genhtml_legend=1 00:10:59.940 --rc geninfo_all_blocks=1 00:10:59.940 --rc geninfo_unexecuted_blocks=1 00:10:59.940 00:10:59.940 ' 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.940 --rc genhtml_branch_coverage=1 00:10:59.940 --rc genhtml_function_coverage=1 00:10:59.940 --rc genhtml_legend=1 00:10:59.940 --rc geninfo_all_blocks=1 00:10:59.940 --rc geninfo_unexecuted_blocks=1 00:10:59.940 00:10:59.940 ' 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.940 --rc genhtml_branch_coverage=1 00:10:59.940 --rc genhtml_function_coverage=1 00:10:59.940 --rc genhtml_legend=1 00:10:59.940 --rc geninfo_all_blocks=1 00:10:59.940 --rc geninfo_unexecuted_blocks=1 00:10:59.940 00:10:59.940 ' 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.940 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.941 21:40:32 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.941 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.941 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.200 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:00.200 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:00.200 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.200 21:40:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.777 21:40:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:11:06.777 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:06.777 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:06.777 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:06.777 21:40:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:06.777 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # rdma_device_init 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:06.777 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@526 -- # allocate_nic_ips 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:06.778 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:06.778 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:06.778 altname enp217s0f0np0 00:11:06.778 altname ens818f0np0 00:11:06.778 inet 192.168.100.8/24 scope global mlx_0_0 00:11:06.778 valid_lft forever preferred_lft forever 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:06.778 21:40:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:06.778 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:06.778 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:06.778 altname enp217s0f1np1 00:11:06.778 altname ens818f1np1 00:11:06.778 inet 192.168.100.9/24 scope global mlx_0_1 00:11:06.778 valid_lft forever preferred_lft forever 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 
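The get_ip_address helper traced above reduces to a single pipeline over ip(8); a minimal sketch reconstructed from the xtrace lines (the real nvmf/common.sh may carry extra guards):

get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per IPv4 address; field 4 is ADDR/PREFIX.
    # awk picks that field, cut strips the /PREFIX suffix.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this test bed
get_ip_address mlx_0_1   # 192.168.100.9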
00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:11:06.778 192.168.100.9' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # head -n 1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:11:06.778 192.168.100.9' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:11:06.778 192.168.100.9' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # tail -n +2 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # head -n 1 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.778 21:40:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=2946610 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 2946610 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2946610 ']' 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.778 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.779 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.779 21:40:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:06.779 [2024-11-29 21:40:38.792771] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:06.779 [2024-11-29 21:40:38.792827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.779 [2024-11-29 21:40:38.865944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.779 [2024-11-29 21:40:38.906287] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.779 [2024-11-29 21:40:38.906329] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.779 [2024-11-29 21:40:38.906339] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.779 [2024-11-29 21:40:38.906347] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.779 [2024-11-29 21:40:38.906354] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
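nvmfappstart, expanded in the trace above, amounts to forking the target binary and waiting on its RPC socket; the snippet below is a simplified stand-in for the waitforlisten step (the real helper retries an RPC probe up to the max_retries=100 visible in the trace):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the app listens on the default UNIX-domain RPC socket.
for ((i = 0; i < 100; i++)); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done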
00:11:06.779 [2024-11-29 21:40:38.906403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.779 [2024-11-29 21:40:38.906424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.779 [2024-11-29 21:40:38.906494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.779 [2024-11-29 21:40:38.906496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.779 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.779 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:06.779 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:06.779 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 [2024-11-29 21:40:39.095638] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xad9f50/0xade400) succeed. 00:11:07.039 [2024-11-29 21:40:39.105905] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xadb540/0xb1faa0) succeed. 
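With the RDMA transport created and both mlx5 ports registered, the loop that follows stamps out four identical null-bdev subsystems, then exposes the discovery service and a referral. Condensed from the trace (rpc_cmd is the harness wrapper around SPDK's rpc.py; per SPDK's bdev_null_create, 102400 is the bdev size in MiB):

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in $(seq 1 4); do
    rpc.py bdev_null_create Null$i 102400 512        # name, size (MiB), block size (B)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done
rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430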
00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 Null1 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 [2024-11-29 21:40:39.268524] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.039 Null2 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:07.039 21:40:39 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.039 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.299 Null3 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:07.299 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 Null4 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:11:07.300 00:11:07.300 Discovery Log Number of Records 6, Generation counter 6 00:11:07.300 =====Discovery Log Entry 0====== 00:11:07.300 trtype: rdma 00:11:07.300 adrfam: ipv4 00:11:07.300 subtype: current discovery subsystem 00:11:07.300 treq: not required 00:11:07.300 portid: 0 00:11:07.300 trsvcid: 4420 00:11:07.300 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:07.300 traddr: 192.168.100.8 00:11:07.300 eflags: explicit discovery connections, duplicate discovery information 00:11:07.300 rdma_prtype: not specified 00:11:07.300 rdma_qptype: connected 00:11:07.300 rdma_cms: rdma-cm 00:11:07.300 rdma_pkey: 0x0000 00:11:07.300 =====Discovery Log Entry 1====== 00:11:07.300 trtype: rdma 00:11:07.300 adrfam: ipv4 00:11:07.300 subtype: nvme subsystem 00:11:07.300 treq: not required 00:11:07.300 portid: 0 00:11:07.300 trsvcid: 4420 00:11:07.300 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:07.300 traddr: 192.168.100.8 00:11:07.300 eflags: none 00:11:07.300 rdma_prtype: not specified 00:11:07.300 rdma_qptype: connected 00:11:07.300 rdma_cms: rdma-cm 00:11:07.300 rdma_pkey: 0x0000 00:11:07.300 =====Discovery Log Entry 2====== 00:11:07.300 trtype: rdma 00:11:07.300 adrfam: ipv4 00:11:07.300 subtype: nvme subsystem 00:11:07.300 treq: not required 00:11:07.300 portid: 0 00:11:07.300 trsvcid: 4420 00:11:07.300 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:07.300 traddr: 192.168.100.8 00:11:07.300 eflags: none 00:11:07.300 rdma_prtype: not specified 00:11:07.300 rdma_qptype: connected 00:11:07.300 rdma_cms: rdma-cm 00:11:07.300 rdma_pkey: 0x0000 00:11:07.300 =====Discovery Log Entry 3====== 00:11:07.300 trtype: rdma 00:11:07.300 adrfam: ipv4 00:11:07.300 subtype: nvme subsystem 00:11:07.300 treq: not required 00:11:07.300 portid: 0 00:11:07.300 trsvcid: 4420 00:11:07.300 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:07.300 traddr: 192.168.100.8 00:11:07.300 eflags: none 00:11:07.300 rdma_prtype: not specified 00:11:07.300 rdma_qptype: connected 00:11:07.300 rdma_cms: rdma-cm 00:11:07.300 rdma_pkey: 0x0000 00:11:07.300 =====Discovery Log Entry 4====== 00:11:07.300 trtype: rdma 00:11:07.300 adrfam: ipv4 00:11:07.300 subtype: nvme subsystem 00:11:07.300 treq: not required 00:11:07.300 portid: 0 00:11:07.300 trsvcid: 4420 00:11:07.300 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:07.300 traddr: 192.168.100.8 00:11:07.300 eflags: none 00:11:07.300 rdma_prtype: not specified 00:11:07.300 rdma_qptype: connected 00:11:07.300 rdma_cms: rdma-cm 00:11:07.300 rdma_pkey: 0x0000 00:11:07.300 =====Discovery Log Entry 5====== 00:11:07.300 trtype: rdma 00:11:07.300 adrfam: ipv4 00:11:07.300 subtype: discovery subsystem referral 00:11:07.300 treq: not required 00:11:07.300 portid: 0 00:11:07.300 trsvcid: 4430 00:11:07.300 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:07.300 traddr: 192.168.100.8 00:11:07.300 eflags: none 00:11:07.300 rdma_prtype: unrecognized 00:11:07.300 rdma_qptype: unrecognized 00:11:07.300 rdma_cms: unrecognized 00:11:07.300 rdma_pkey: 0x0000 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:07.300 Perform nvmf subsystem discovery via RPC 00:11:07.300 21:40:39 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.300 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.300 [ 00:11:07.300 { 00:11:07.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:07.300 "subtype": "Discovery", 00:11:07.300 "listen_addresses": [ 00:11:07.300 { 00:11:07.300 "trtype": "RDMA", 00:11:07.300 "adrfam": "IPv4", 00:11:07.300 "traddr": "192.168.100.8", 00:11:07.300 "trsvcid": "4420" 00:11:07.300 } 00:11:07.300 ], 00:11:07.300 "allow_any_host": true, 00:11:07.300 "hosts": [] 00:11:07.300 }, 00:11:07.300 { 00:11:07.300 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:07.300 "subtype": "NVMe", 00:11:07.300 "listen_addresses": [ 00:11:07.300 { 00:11:07.300 "trtype": "RDMA", 00:11:07.300 "adrfam": "IPv4", 00:11:07.300 "traddr": "192.168.100.8", 00:11:07.300 "trsvcid": "4420" 00:11:07.300 } 00:11:07.300 ], 00:11:07.300 "allow_any_host": true, 00:11:07.300 "hosts": [], 00:11:07.300 "serial_number": "SPDK00000000000001", 00:11:07.300 "model_number": "SPDK bdev Controller", 00:11:07.300 "max_namespaces": 32, 00:11:07.300 "min_cntlid": 1, 00:11:07.300 "max_cntlid": 65519, 00:11:07.300 "namespaces": [ 00:11:07.300 { 00:11:07.300 "nsid": 1, 00:11:07.300 "bdev_name": "Null1", 00:11:07.300 "name": "Null1", 00:11:07.300 "nguid": "3B928F68FC3341EDB641BC0FA197827F", 00:11:07.300 "uuid": "3b928f68-fc33-41ed-b641-bc0fa197827f" 00:11:07.300 } 00:11:07.300 ] 00:11:07.300 }, 00:11:07.300 { 00:11:07.300 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:07.300 "subtype": "NVMe", 00:11:07.300 "listen_addresses": [ 00:11:07.300 { 00:11:07.300 "trtype": "RDMA", 00:11:07.300 "adrfam": "IPv4", 00:11:07.300 "traddr": "192.168.100.8", 00:11:07.300 "trsvcid": "4420" 00:11:07.300 } 00:11:07.300 ], 00:11:07.300 "allow_any_host": true, 00:11:07.300 "hosts": [], 00:11:07.300 "serial_number": "SPDK00000000000002", 00:11:07.300 "model_number": "SPDK bdev Controller", 00:11:07.300 "max_namespaces": 32, 00:11:07.300 "min_cntlid": 1, 00:11:07.300 "max_cntlid": 65519, 00:11:07.300 "namespaces": [ 00:11:07.300 { 00:11:07.300 "nsid": 1, 00:11:07.300 "bdev_name": "Null2", 00:11:07.300 "name": "Null2", 00:11:07.300 "nguid": "6CCE6752060840CC9ECCC9650DB7668C", 00:11:07.300 "uuid": "6cce6752-0608-40cc-9ecc-c9650db7668c" 00:11:07.300 } 00:11:07.300 ] 00:11:07.300 }, 00:11:07.300 { 00:11:07.300 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:07.300 "subtype": "NVMe", 00:11:07.301 "listen_addresses": [ 00:11:07.301 { 00:11:07.301 "trtype": "RDMA", 00:11:07.301 "adrfam": "IPv4", 00:11:07.301 "traddr": "192.168.100.8", 00:11:07.301 "trsvcid": "4420" 00:11:07.301 } 00:11:07.301 ], 00:11:07.301 "allow_any_host": true, 00:11:07.301 "hosts": [], 00:11:07.301 "serial_number": "SPDK00000000000003", 00:11:07.301 "model_number": "SPDK bdev Controller", 00:11:07.301 "max_namespaces": 32, 00:11:07.301 "min_cntlid": 1, 00:11:07.301 "max_cntlid": 65519, 00:11:07.301 "namespaces": [ 00:11:07.301 { 00:11:07.301 "nsid": 1, 00:11:07.301 "bdev_name": "Null3", 00:11:07.301 "name": "Null3", 00:11:07.301 "nguid": "7D59281382384C09A230ACF8D97DB4AB", 00:11:07.301 "uuid": "7d592813-8238-4c09-a230-acf8d97db4ab" 00:11:07.301 } 00:11:07.301 ] 00:11:07.301 }, 00:11:07.301 { 00:11:07.301 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:07.301 "subtype": "NVMe", 00:11:07.301 "listen_addresses": [ 00:11:07.301 { 00:11:07.301 
"trtype": "RDMA", 00:11:07.301 "adrfam": "IPv4", 00:11:07.301 "traddr": "192.168.100.8", 00:11:07.301 "trsvcid": "4420" 00:11:07.301 } 00:11:07.301 ], 00:11:07.301 "allow_any_host": true, 00:11:07.301 "hosts": [], 00:11:07.301 "serial_number": "SPDK00000000000004", 00:11:07.301 "model_number": "SPDK bdev Controller", 00:11:07.301 "max_namespaces": 32, 00:11:07.301 "min_cntlid": 1, 00:11:07.301 "max_cntlid": 65519, 00:11:07.301 "namespaces": [ 00:11:07.301 { 00:11:07.301 "nsid": 1, 00:11:07.301 "bdev_name": "Null4", 00:11:07.301 "name": "Null4", 00:11:07.301 "nguid": "C71BCDFF014B4CDEAF4B709BC2B77018", 00:11:07.301 "uuid": "c71bcdff-014b-4cde-af4b-709bc2b77018" 00:11:07.301 } 00:11:07.301 ] 00:11:07.301 } 00:11:07.301 ] 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.301 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.560 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:07.561 
21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:07.561 21:40:39 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:07.561 rmmod nvme_rdma 00:11:07.561 rmmod nvme_fabrics 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 2946610 ']' 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 2946610 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2946610 ']' 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2946610 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2946610 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2946610' 00:11:07.561 killing process with pid 2946610 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2946610 00:11:07.561 21:40:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2946610 00:11:07.821 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:07.821 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:11:07.821 00:11:07.821 real 0m8.084s 00:11:07.821 user 0m6.311s 00:11:07.821 sys 0m5.502s 00:11:07.821 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.821 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:07.821 ************************************ 00:11:07.821 END TEST nvmf_target_discovery 
00:11:07.821 ************************************ 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.080 ************************************ 00:11:08.080 START TEST nvmf_referrals 00:11:08.080 ************************************ 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:08.080 * Looking for test storage... 00:11:08.080 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.080 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:08.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.081 --rc genhtml_branch_coverage=1 00:11:08.081 --rc genhtml_function_coverage=1 00:11:08.081 --rc genhtml_legend=1 00:11:08.081 --rc geninfo_all_blocks=1 00:11:08.081 --rc geninfo_unexecuted_blocks=1 00:11:08.081 00:11:08.081 ' 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:08.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.081 --rc genhtml_branch_coverage=1 00:11:08.081 --rc genhtml_function_coverage=1 00:11:08.081 --rc genhtml_legend=1 00:11:08.081 --rc geninfo_all_blocks=1 00:11:08.081 --rc geninfo_unexecuted_blocks=1 00:11:08.081 00:11:08.081 ' 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:08.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.081 --rc genhtml_branch_coverage=1 00:11:08.081 --rc genhtml_function_coverage=1 00:11:08.081 --rc genhtml_legend=1 00:11:08.081 --rc geninfo_all_blocks=1 00:11:08.081 --rc geninfo_unexecuted_blocks=1 00:11:08.081 00:11:08.081 ' 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:08.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.081 --rc genhtml_branch_coverage=1 00:11:08.081 --rc genhtml_function_coverage=1 00:11:08.081 --rc genhtml_legend=1 00:11:08.081 --rc geninfo_all_blocks=1 00:11:08.081 --rc geninfo_unexecuted_blocks=1 00:11:08.081 00:11:08.081 ' 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.081 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.342 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.342 21:40:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:14.910 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:14.910 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:14.910 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:14.910 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 
00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # rdma_device_init 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@526 -- # allocate_nic_ips 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:14.910 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@109 -- # continue 2 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:14.911 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:14.911 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:14.911 altname enp217s0f0np0 00:11:14.911 altname ens818f0np0 00:11:14.911 inet 192.168.100.8/24 scope global mlx_0_0 00:11:14.911 valid_lft forever preferred_lft forever 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:14.911 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:14.911 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:14.911 altname enp217s0f1np1 00:11:14.911 altname ens818f1np1 00:11:14.911 inet 192.168.100.9/24 scope global mlx_0_1 00:11:14.911 valid_lft forever preferred_lft forever 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:14.911 21:40:46 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:14.911 21:40:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:11:14.911 192.168.100.9' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # head -n 1 00:11:14.911 21:40:47 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:11:14.911 192.168.100.9' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:11:14.911 192.168.100.9' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # tail -n +2 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # head -n 1 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=2950081 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 2950081 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2950081 ']' 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.911 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.911 [2024-11-29 21:40:47.147246] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
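The repeated interface/awk/cut expansions above are all the same small helper, get_ip_address, which prints the first IPv4 address of an interface with the prefix length stripped; RDMA_IP_LIST is just its output collected for every RDMA-capable interface. Written out as the function the xtrace keeps expanding (nvmf/common.sh@116-117):

  # get_ip_address, as expanded inline by the trace above.
  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per IPv4 address; field 4 is addr/prefix.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed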
00:11:14.911 [2024-11-29 21:40:47.147301] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.171 [2024-11-29 21:40:47.218549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.171 [2024-11-29 21:40:47.258107] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.171 [2024-11-29 21:40:47.258152] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.171 [2024-11-29 21:40:47.258161] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.171 [2024-11-29 21:40:47.258169] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.171 [2024-11-29 21:40:47.258176] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.171 [2024-11-29 21:40:47.258290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.171 [2024-11-29 21:40:47.258403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.171 [2024-11-29 21:40:47.258469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.171 [2024-11-29 21:40:47.258470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.171 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.430 [2024-11-29 21:40:47.442270] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x114ef50/0x1153400) succeed. 00:11:15.430 [2024-11-29 21:40:47.452602] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1150540/0x1194aa0) succeed. 
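With both ports addressed, nvmfappstart launches the target (pid 2950081 in this run) and the first RPC creates the RDMA transport, which opens both mlx5 IB devices as the two "Create IB device ... succeed." notices show. Condensed to the underlying commands, with a crude readiness loop standing in for the autotest's more careful waitforlisten (a sketch, not the autotest code):

  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  # Sketch of waitforlisten: retry an RPC until /var/tmp/spdk.sock answers.
  until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # Same transport options as the rpc_cmd in the trace.
  "$rootdir/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192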
00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.430 [2024-11-29 21:40:47.575662] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.430 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:15.689 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:15.690 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:15.690 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.949 21:40:47 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:15.949 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:16.208 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:16.208 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:16.209 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:16.468 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
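Every add and remove in this test is verified the same way from the host side: pull the discovery log page as JSON with nvme discover, keep only the non-current-subsystem (i.e. referral) records, and sort so the output can be string-compared against the expected address list. This is the pipeline the get_referral_ips helper keeps rebuilding above, with the --hostnqn/--hostid flags omitted here for brevity:

  nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort

After the last referral is removed, this prints nothing, which is exactly what the closing [[ '' == '' ]] assertion just below checks.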
00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.727 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:16.727 rmmod nvme_rdma 00:11:16.727 rmmod nvme_fabrics 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 2950081 ']' 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 2950081 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2950081 ']' 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2950081 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.987 21:40:48 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2950081 00:11:16.987 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.987 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.987 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2950081' 00:11:16.987 killing process with pid 2950081 00:11:16.987 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2950081 00:11:16.987 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2950081 00:11:17.247 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:17.247 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:11:17.247 00:11:17.247 real 0m9.200s 00:11:17.247 user 0m10.874s 00:11:17.247 sys 0m5.992s 00:11:17.247 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.247 21:40:49 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.247 ************************************ 00:11:17.247 END TEST nvmf_referrals 00:11:17.247 ************************************ 00:11:17.247 21:40:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:17.247 21:40:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:17.247 21:40:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.247 21:40:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.247 ************************************ 00:11:17.247 START TEST nvmf_connect_disconnect 00:11:17.247 ************************************ 00:11:17.247 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:17.508 * Looking for test storage... 00:11:17.508 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:17.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.508 --rc genhtml_branch_coverage=1 00:11:17.508 --rc genhtml_function_coverage=1 00:11:17.508 --rc genhtml_legend=1 00:11:17.508 --rc geninfo_all_blocks=1 00:11:17.508 --rc geninfo_unexecuted_blocks=1 00:11:17.508 00:11:17.508 ' 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:17.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.508 --rc genhtml_branch_coverage=1 00:11:17.508 --rc genhtml_function_coverage=1 00:11:17.508 --rc genhtml_legend=1 00:11:17.508 --rc geninfo_all_blocks=1 00:11:17.508 --rc geninfo_unexecuted_blocks=1 00:11:17.508 00:11:17.508 ' 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:17.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.508 --rc genhtml_branch_coverage=1 00:11:17.508 --rc genhtml_function_coverage=1 00:11:17.508 --rc genhtml_legend=1 00:11:17.508 --rc geninfo_all_blocks=1 00:11:17.508 --rc geninfo_unexecuted_blocks=1 00:11:17.508 00:11:17.508 ' 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:17.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.508 --rc genhtml_branch_coverage=1 00:11:17.508 --rc genhtml_function_coverage=1 00:11:17.508 --rc genhtml_legend=1 00:11:17.508 --rc geninfo_all_blocks=1 00:11:17.508 --rc geninfo_unexecuted_blocks=1 00:11:17.508 00:11:17.508 ' 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.508 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.509 21:40:49 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.509 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.509 21:40:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:25.633 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ 
mlx5_core == unknown ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:25.633 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.633 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:25.634 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:25.634 21:40:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:25.634 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # rdma_device_init 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@526 -- # allocate_nic_ips 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:25.634 21:40:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:25.634 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.634 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:25.634 altname enp217s0f0np0 00:11:25.634 altname ens818f0np0 00:11:25.634 inet 192.168.100.8/24 scope global mlx_0_0 00:11:25.634 valid_lft forever preferred_lft forever 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:25.634 
21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:25.634 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.634 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:25.634 altname enp217s0f1np1 00:11:25.634 altname ens818f1np1 00:11:25.634 inet 192.168.100.9/24 scope global mlx_0_1 00:11:25.634 valid_lft forever preferred_lft forever 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:11:25.634 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:25.635 21:40:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:11:25.635 192.168.100.9' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:11:25.635 192.168.100.9' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # head -n 1 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # tail -n +2 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:11:25.635 192.168.100.9' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # head -n 1 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:25.635 
21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=2953955 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 2953955 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2953955 ']' 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.635 21:40:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.635 [2024-11-29 21:40:56.855386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:25.635 [2024-11-29 21:40:56.855442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.635 [2024-11-29 21:40:56.931380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.635 [2024-11-29 21:40:56.971460] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.635 [2024-11-29 21:40:56.971505] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.635 [2024-11-29 21:40:56.971514] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.635 [2024-11-29 21:40:56.971522] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.635 [2024-11-29 21:40:56.971529] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
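The app_setup_trace notices above spell out how to pull the tracepoint data while the target is running. A minimal sketch of both options, assuming the target was started with shared-memory id 0 (the -i 0 argument visible in the nvmf_tgt invocation above) and that the spdk_trace tool from the same build is on PATH:

  # decode a live snapshot of the nvmf tracepoint group from shm id 0
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # or keep the raw shared-memory file for offline decoding, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0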
00:11:25.635 [2024-11-29 21:40:56.971582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.635 [2024-11-29 21:40:56.971603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.635 [2024-11-29 21:40:56.971693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.635 [2024-11-29 21:40:56.971695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.635 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.635 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:25.635 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:25.635 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.635 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.635 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.636 [2024-11-29 21:40:57.141804] rdma.c:2737:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:25.636 [2024-11-29 21:40:57.164737] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1116f50/0x111b400) succeed. 00:11:25.636 [2024-11-29 21:40:57.175359] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1118540/0x115caa0) succeed. 
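The rpc_cmd traces around this point configure the just-started target over /var/tmp/spdk.sock: an RDMA transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 192.168.100.8:4420. Outside the rpc_cmd wrapper the same sequence is plain scripts/rpc.py calls. A hedged sketch follows; the connect/disconnect loop is a reconstruction of what connect_disconnect.sh drives (the real script also waits for the namespace block device between steps), with the host NQN/ID, the -i 8 queue count, and the 100-iteration count taken from the trace:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 64 512        # prints the new bdev name, e.g. Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # 100 connect/disconnect cycles against that listener; each disconnect prints the
  # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines seen below
  for i in $(seq 1 100); do
      nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
          --hostid=8013ee90-59d8-e711-906e-00163566263e
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done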
00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.636 [2024-11-29 21:40:57.315233] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:25.636 21:40:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:28.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.729 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:47.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.317 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.830 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:40.830 21:46:12 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:40.830 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:40.830 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:40.830 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:40.831 rmmod nvme_rdma 00:16:40.831 rmmod nvme_fabrics 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 2953955 ']' 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 2953955 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2953955 ']' 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2953955 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2953955 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2953955' 00:16:40.831 killing process with pid 2953955 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2953955 00:16:40.831 21:46:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2953955 00:16:40.831 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:40.831 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:16:40.831 00:16:40.831 real 5m23.650s 00:16:40.831 user 21m1.548s 00:16:40.831 sys 0m18.383s 00:16:40.831 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:40.831 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:40.831 
************************************ 00:16:40.831 END TEST nvmf_connect_disconnect 00:16:40.831 ************************************ 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:41.091 ************************************ 00:16:41.091 START TEST nvmf_multitarget 00:16:41.091 ************************************ 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:16:41.091 * Looking for test storage... 00:16:41.091 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:41.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.091 --rc genhtml_branch_coverage=1 00:16:41.091 --rc genhtml_function_coverage=1 00:16:41.091 --rc genhtml_legend=1 00:16:41.091 --rc geninfo_all_blocks=1 00:16:41.091 --rc geninfo_unexecuted_blocks=1 00:16:41.091 00:16:41.091 ' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:41.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.091 --rc genhtml_branch_coverage=1 00:16:41.091 --rc genhtml_function_coverage=1 00:16:41.091 --rc genhtml_legend=1 00:16:41.091 --rc geninfo_all_blocks=1 00:16:41.091 --rc geninfo_unexecuted_blocks=1 00:16:41.091 00:16:41.091 ' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:41.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.091 --rc genhtml_branch_coverage=1 00:16:41.091 --rc genhtml_function_coverage=1 00:16:41.091 --rc genhtml_legend=1 00:16:41.091 --rc geninfo_all_blocks=1 00:16:41.091 --rc geninfo_unexecuted_blocks=1 00:16:41.091 00:16:41.091 ' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:41.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.091 --rc genhtml_branch_coverage=1 00:16:41.091 --rc genhtml_function_coverage=1 00:16:41.091 --rc genhtml_legend=1 00:16:41.091 --rc geninfo_all_blocks=1 00:16:41.091 --rc geninfo_unexecuted_blocks=1 00:16:41.091 00:16:41.091 ' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.091 21:46:13 
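The block above is scripts/common.sh deciding whether the installed lcov predates 2.x, to pick the right coverage flags. A hedged reconstruction of that component-wise compare (simplified sketch for dotted numeric versions only; the real helper splits on IFS=.-: as traced):

cmp_versions() {
    local IFS=.-: op=$2 v
    local -a ver1=($1) ver2=($3)
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == *=* ]]   # all components equal: true only for ==, <=, >=
}
lt() { cmp_versions "$1" '<' "$2"; }
# lt 1.15 2 succeeds, so the lcov 1.x option set is exported above.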
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.091 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:41.351 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:41.351 21:46:13 
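The "[: : integer expression expected" message above is a benign, recurring bug captured by the trace: at nvmf/common.sh line 33 an unset variable reaches test's numeric -eq as an empty string (the same message reappears later when rpc.sh sources the file again). A defensive sketch of the fix, with SOME_FLAG as a placeholder name:

# Failing form, as captured: [ '' -eq 1 ] is not a valid integer test.
# Guarded form: default the variable before the numeric comparison.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi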
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:41.351 21:46:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:47.924 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:47.924 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 
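The loop above walks a prebuilt PCI cache and buckets NICs into e810/x722/mlx arrays by vendor:device ID; 0x15b3:0x1015 (matched twice here) is a Mellanox ConnectX-4 Lx. A rough equivalent that queries lspci directly instead of the cached map (device-ID list abridged; IDs copied from the trace):

declare -a mlx=()
while read -r addr id; do
    case $id in
        15b3:1013 | 15b3:1015 | 15b3:1017 | 15b3:1019) mlx+=("$addr") ;;
    esac
done < <(lspci -Dn | awk '{print $1, $3}')   # fields: PCI address, vendor:device
echo "mlx devices: ${mlx[*]}"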
00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:47.924 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:47.924 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:16:47.924 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # rdma_device_init 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@526 -- # allocate_nic_ips 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:47.925 21:46:19 
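Two steps just traced: each matched PCI function is mapped to its kernel netdev through sysfs, and the RDMA module stack is loaded before any addresses are assigned. The same steps standalone (root required; PCI address and module list taken from the log above):

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)          # e.g. .../net/mlx_0_0
echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done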
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:47.925 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:47.925 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:47.925 altname enp217s0f0np0 00:16:47.925 altname ens818f0np0 00:16:47.925 inet 192.168.100.8/24 scope global mlx_0_0 00:16:47.925 valid_lft forever preferred_lft forever 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:47.925 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:47.925 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:47.925 altname enp217s0f1np1 00:16:47.925 altname ens818f1np1 00:16:47.925 inet 192.168.100.9/24 scope global mlx_0_1 00:16:47.925 valid_lft forever preferred_lft forever 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:16:47.925 192.168.100.9' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:16:47.925 192.168.100.9' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # head -n 1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:16:47.925 192.168.100.9' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # tail -n +2 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # head -n 1 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:47.925 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=3013291 00:16:47.926 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 3013291 00:16:47.926 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.926 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3013291 ']' 00:16:47.926 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.926 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.926 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.926 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.926 21:46:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:47.926 [2024-11-29 21:46:19.987185] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:47.926 [2024-11-29 21:46:19.987237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.926 [2024-11-29 21:46:20.059267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.926 [2024-11-29 21:46:20.101154] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
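The address bookkeeping traced above reduces to a couple of pipelines: pull the IPv4 off each RDMA interface, then peel the first and second target IPs from the newline-separated list. A condensed sketch (interface names and addresses as on this machine):

get_ip_address() {                 # get_ip_address mlx_0_0 -> 192.168.100.8
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)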
00:16:47.926 [2024-11-29 21:46:20.101199] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.926 [2024-11-29 21:46:20.101210] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.926 [2024-11-29 21:46:20.101220] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.926 [2024-11-29 21:46:20.101228] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.926 [2024-11-29 21:46:20.101277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.926 [2024-11-29 21:46:20.101409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.926 [2024-11-29 21:46:20.101502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.926 [2024-11-29 21:46:20.101504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:48.186 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:48.445 "nvmf_tgt_1" 00:16:48.445 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:48.445 "nvmf_tgt_2" 00:16:48.445 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:48.445 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:48.704 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:48.704 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:48.704 true 00:16:48.704 21:46:20 
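This is the heart of the multitarget test just traced: the RPC helper creates two extra targets, and jq length checks the target count at each step. As a usage sketch (rpc path and expected counts taken from the trace; the default target accounts for the initial 1):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc_py nvmf_get_targets | jq length)" = 1 ]   # only the default target so far
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" = 3 ]   # default + the two new ones
$rpc_py nvmf_delete_target -n nvmf_tgt_1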
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:48.704 true 00:16:48.704 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:48.704 21:46:20 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:48.964 rmmod nvme_rdma 00:16:48.964 rmmod nvme_fabrics 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 3013291 ']' 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 3013291 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3013291 ']' 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3013291 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3013291 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3013291' 00:16:48.964 killing process with pid 3013291 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3013291 00:16:48.964 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 
3013291 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:16:49.223 00:16:49.223 real 0m8.177s 00:16:49.223 user 0m7.385s 00:16:49.223 sys 0m5.534s 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:49.223 ************************************ 00:16:49.223 END TEST nvmf_multitarget 00:16:49.223 ************************************ 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:49.223 ************************************ 00:16:49.223 START TEST nvmf_rpc 00:16:49.223 ************************************ 00:16:49.223 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:16:49.483 * Looking for test storage... 00:16:49.483 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.483 
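Every test in this log is wrapped the same way: an asterisk START banner, the script under time (hence the real/user/sys lines above), an END banner, then the next run_test call. A plausible sketch of that wrapper (assumption: the real helper in autotest_common.sh also validates arguments and manages xtrace):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                              # run the test script, print timing
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}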
21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:49.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.483 --rc genhtml_branch_coverage=1 00:16:49.483 --rc genhtml_function_coverage=1 00:16:49.483 --rc genhtml_legend=1 00:16:49.483 --rc geninfo_all_blocks=1 00:16:49.483 --rc geninfo_unexecuted_blocks=1 00:16:49.483 00:16:49.483 ' 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:49.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.483 --rc genhtml_branch_coverage=1 00:16:49.483 --rc genhtml_function_coverage=1 00:16:49.483 --rc genhtml_legend=1 00:16:49.483 --rc geninfo_all_blocks=1 00:16:49.483 --rc geninfo_unexecuted_blocks=1 00:16:49.483 00:16:49.483 ' 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:49.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.483 --rc genhtml_branch_coverage=1 00:16:49.483 --rc genhtml_function_coverage=1 00:16:49.483 --rc genhtml_legend=1 00:16:49.483 --rc geninfo_all_blocks=1 00:16:49.483 --rc geninfo_unexecuted_blocks=1 00:16:49.483 00:16:49.483 ' 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:49.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.483 --rc genhtml_branch_coverage=1 00:16:49.483 --rc genhtml_function_coverage=1 00:16:49.483 --rc genhtml_legend=1 00:16:49.483 --rc geninfo_all_blocks=1 00:16:49.483 --rc geninfo_unexecuted_blocks=1 00:16:49.483 00:16:49.483 ' 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname 
-s 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.483 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:49.484 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:16:49.484 21:46:21 
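nvmftestinit begins here for the rpc test, and its first action, shown just below, installs nvmftestfini as a signal/exit trap so teardown (module unload, process kill) runs on any exit path. The 15> /dev/null redirection on _remove_spdk_ns presumably silences xtrace for that one call, assuming fd 15 is the trace descriptor. A minimal sketch of the trap idiom:

nvmftestfini() { echo "cleanup"; }       # stand-in body for illustration only
trap nvmftestfini SIGINT SIGTERM EXIT    # teardown runs on Ctrl-C, kill, or normal exit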
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:49.484 21:46:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.061 21:46:28 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:56.061 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:56.061 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:56.062 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:16:56.062 21:46:28 
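The heavily escaped comparisons above are an xtrace artifact: bash backslash-escapes every character on the pattern side of [[ == ]], so \0\x\1\0\1\7 is just the literal string 0x1017. Both ports here report device ID 0x1015 (ConnectX-4 Lx), so the ConnectX-5 (0x1017) and ConnectX-6 (0x1019) branches are skipped, and the rdma path installs a custom connect command. De-noised:

    case "$device_id" in
        0x1017|0x1019) : ;;   # ConnectX-5/6 parts get extra handling in common.sh
    esac
    [[ $TEST_TRANSPORT == rdma ]] && NVME_CONNECT='nvme connect -i 15'   # -i is nvme-cli's --nr-io-queues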
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:56.062 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:56.062 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # rdma_device_init 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@526 -- # allocate_nic_ips 00:16:56.062 21:46:28 
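With the two PCI functions identified, the script resolves each function to its netdev through sysfs, then rdma_device_init loads the IB/RDMA kernel stack before IPs are assigned. The two steps reduce to:

    pci=0000:d9:00.0                           # first ConnectX port in this log
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    pci_net_devs=( "${pci_net_devs[@]##*/}" )  # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> mlx_0_0
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"                          # same module list as common.sh@66-72 above
    done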
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:56.062 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:56.062 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:56.062 altname enp217s0f0np0 00:16:56.062 altname ens818f0np0 00:16:56.062 inet 192.168.100.8/24 scope global mlx_0_0 00:16:56.062 valid_lft forever preferred_lft forever 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in 
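get_rdma_if_list matched both netdevs against the rxe_cfg output (the "continue 2" jumps are the match bookkeeping), and get_ip_address is the three-stage pipeline traced at common.sh@116-117. Written out as a function:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1   # "192.168.100.8/24" -> "192.168.100.8"
    }
    ip=$(get_ip_address mlx_0_0)    # -> 192.168.100.8 on this rig
    # allocate_nic_ips presumably assigns an address here when an interface has none;
    # on this host both ports already carry 192.168.100.8/24 and .9/24.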
$(get_rdma_if_list) 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:56.062 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:56.062 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:56.062 altname enp217s0f1np1 00:16:56.062 altname ens818f1np1 00:16:56.062 inet 192.168.100.9/24 scope global mlx_0_1 00:16:56.062 valid_lft forever preferred_lft forever 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:16:56.062 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:16:56.063 192.168.100.9' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:16:56.063 192.168.100.9' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # head -n 1 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:16:56.063 192.168.100.9' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # tail -n +2 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # head -n 1 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # 
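RDMA_IP_LIST carries one address per line, and common.sh@481-482 peel off the first and second entries; the transport options and the host-side driver load follow directly:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma              # initiator-side driver, needed before any nvme connect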
nvmfpid=3016799 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 3016799 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3016799 ']' 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:56.063 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.322 [2024-11-29 21:46:28.318488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:56.322 [2024-11-29 21:46:28.318541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.322 [2024-11-29 21:46:28.388462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.322 [2024-11-29 21:46:28.428284] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.322 [2024-11-29 21:46:28.428327] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.322 [2024-11-29 21:46:28.428338] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.322 [2024-11-29 21:46:28.428346] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.322 [2024-11-29 21:46:28.428353] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
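nvmfappstart boils down to launching the target with the shm id, tracepoint mask and core mask seen in the command line above, then blocking until the RPC socket answers. A sketch, assuming the usual poll-until-ready shape of waitforlisten:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten (assumed behavior): poll until the UNIX-domain RPC socket accepts requests
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done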
00:16:56.322 [2024-11-29 21:46:28.428397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.322 [2024-11-29 21:46:28.428494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.322 [2024-11-29 21:46:28.428585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.322 [2024-11-29 21:46:28.428587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.322 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.322 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:56.322 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:56.322 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:56.322 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:56.582 "tick_rate": 2500000000, 00:16:56.582 "poll_groups": [ 00:16:56.582 { 00:16:56.582 "name": "nvmf_tgt_poll_group_000", 00:16:56.582 "admin_qpairs": 0, 00:16:56.582 "io_qpairs": 0, 00:16:56.582 "current_admin_qpairs": 0, 00:16:56.582 "current_io_qpairs": 0, 00:16:56.582 "pending_bdev_io": 0, 00:16:56.582 "completed_nvme_io": 0, 00:16:56.582 "transports": [] 00:16:56.582 }, 00:16:56.582 { 00:16:56.582 "name": "nvmf_tgt_poll_group_001", 00:16:56.582 "admin_qpairs": 0, 00:16:56.582 "io_qpairs": 0, 00:16:56.582 "current_admin_qpairs": 0, 00:16:56.582 "current_io_qpairs": 0, 00:16:56.582 "pending_bdev_io": 0, 00:16:56.582 "completed_nvme_io": 0, 00:16:56.582 "transports": [] 00:16:56.582 }, 00:16:56.582 { 00:16:56.582 "name": "nvmf_tgt_poll_group_002", 00:16:56.582 "admin_qpairs": 0, 00:16:56.582 "io_qpairs": 0, 00:16:56.582 "current_admin_qpairs": 0, 00:16:56.582 "current_io_qpairs": 0, 00:16:56.582 "pending_bdev_io": 0, 00:16:56.582 "completed_nvme_io": 0, 00:16:56.582 "transports": [] 00:16:56.582 }, 00:16:56.582 { 00:16:56.582 "name": "nvmf_tgt_poll_group_003", 00:16:56.582 "admin_qpairs": 0, 00:16:56.582 "io_qpairs": 0, 00:16:56.582 "current_admin_qpairs": 0, 00:16:56.582 "current_io_qpairs": 0, 00:16:56.582 "pending_bdev_io": 0, 00:16:56.582 "completed_nvme_io": 0, 00:16:56.582 "transports": [] 00:16:56.582 } 00:16:56.582 ] 00:16:56.582 }' 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
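The first nvmf_get_stats snapshot shows four idle poll groups (one per core in -m 0xF) with empty transports arrays, since no transport exists yet. jcount, per the trace at target/rpc.sh@14-15, is just a jq filter piped through wc -l:

    stats=$(scripts/rpc.py nvmf_get_stats)     # rpc_cmd is effectively this call
    jcount() { local filter=$1; jq "$filter" <<<"$stats" | wc -l; }
    (( $(jcount '.poll_groups[].name') == 4 )) # one poll group per reactor core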
== 4 )) 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.582 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.582 [2024-11-29 21:46:28.715563] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x169afb0/0x169f460) succeed. 00:16:56.582 [2024-11-29 21:46:28.726653] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x169c5a0/0x16e0b00) succeed. 00:16:56.841 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.841 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:56.841 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.841 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.841 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.841 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:56.841 "tick_rate": 2500000000, 00:16:56.841 "poll_groups": [ 00:16:56.841 { 00:16:56.841 "name": "nvmf_tgt_poll_group_000", 00:16:56.841 "admin_qpairs": 0, 00:16:56.841 "io_qpairs": 0, 00:16:56.841 "current_admin_qpairs": 0, 00:16:56.841 "current_io_qpairs": 0, 00:16:56.841 "pending_bdev_io": 0, 00:16:56.841 "completed_nvme_io": 0, 00:16:56.841 "transports": [ 00:16:56.841 { 00:16:56.841 "trtype": "RDMA", 00:16:56.841 "pending_data_buffer": 0, 00:16:56.841 "devices": [ 00:16:56.841 { 00:16:56.841 "name": "mlx5_0", 00:16:56.841 "polls": 15902, 00:16:56.841 "idle_polls": 15902, 00:16:56.841 "completions": 0, 00:16:56.841 "requests": 0, 00:16:56.841 "request_latency": 0, 00:16:56.841 "pending_free_request": 0, 00:16:56.841 "pending_rdma_read": 0, 00:16:56.841 "pending_rdma_write": 0, 00:16:56.841 "pending_rdma_send": 0, 00:16:56.841 "total_send_wrs": 0, 00:16:56.841 "send_doorbell_updates": 0, 00:16:56.841 "total_recv_wrs": 4096, 00:16:56.841 "recv_doorbell_updates": 1 00:16:56.841 }, 00:16:56.841 { 00:16:56.841 "name": "mlx5_1", 00:16:56.841 "polls": 15902, 00:16:56.841 "idle_polls": 15902, 00:16:56.841 "completions": 0, 00:16:56.841 "requests": 0, 00:16:56.841 "request_latency": 0, 00:16:56.841 "pending_free_request": 0, 00:16:56.841 "pending_rdma_read": 0, 00:16:56.842 "pending_rdma_write": 0, 00:16:56.842 "pending_rdma_send": 0, 00:16:56.842 "total_send_wrs": 0, 00:16:56.842 "send_doorbell_updates": 0, 00:16:56.842 "total_recv_wrs": 4096, 00:16:56.842 "recv_doorbell_updates": 1 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "nvmf_tgt_poll_group_001", 00:16:56.842 "admin_qpairs": 0, 00:16:56.842 "io_qpairs": 0, 00:16:56.842 "current_admin_qpairs": 0, 00:16:56.842 "current_io_qpairs": 0, 00:16:56.842 "pending_bdev_io": 0, 00:16:56.842 "completed_nvme_io": 0, 00:16:56.842 "transports": [ 00:16:56.842 { 00:16:56.842 "trtype": "RDMA", 00:16:56.842 "pending_data_buffer": 0, 00:16:56.842 "devices": [ 00:16:56.842 { 00:16:56.842 "name": "mlx5_0", 
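Creating the transport attaches an RDMA poller for each IB device (mlx5_0, mlx5_1) to every poll group, which is why the next stats snapshot grows a devices array per group. The call, with flags as traced at target/rpc.sh@31:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # -u is rpc.py's in-capsule data size; --num-shared-buffers sizes the shared RX buffer pool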
00:16:56.842 "polls": 9885, 00:16:56.842 "idle_polls": 9885, 00:16:56.842 "completions": 0, 00:16:56.842 "requests": 0, 00:16:56.842 "request_latency": 0, 00:16:56.842 "pending_free_request": 0, 00:16:56.842 "pending_rdma_read": 0, 00:16:56.842 "pending_rdma_write": 0, 00:16:56.842 "pending_rdma_send": 0, 00:16:56.842 "total_send_wrs": 0, 00:16:56.842 "send_doorbell_updates": 0, 00:16:56.842 "total_recv_wrs": 4096, 00:16:56.842 "recv_doorbell_updates": 1 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "mlx5_1", 00:16:56.842 "polls": 9885, 00:16:56.842 "idle_polls": 9885, 00:16:56.842 "completions": 0, 00:16:56.842 "requests": 0, 00:16:56.842 "request_latency": 0, 00:16:56.842 "pending_free_request": 0, 00:16:56.842 "pending_rdma_read": 0, 00:16:56.842 "pending_rdma_write": 0, 00:16:56.842 "pending_rdma_send": 0, 00:16:56.842 "total_send_wrs": 0, 00:16:56.842 "send_doorbell_updates": 0, 00:16:56.842 "total_recv_wrs": 4096, 00:16:56.842 "recv_doorbell_updates": 1 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "nvmf_tgt_poll_group_002", 00:16:56.842 "admin_qpairs": 0, 00:16:56.842 "io_qpairs": 0, 00:16:56.842 "current_admin_qpairs": 0, 00:16:56.842 "current_io_qpairs": 0, 00:16:56.842 "pending_bdev_io": 0, 00:16:56.842 "completed_nvme_io": 0, 00:16:56.842 "transports": [ 00:16:56.842 { 00:16:56.842 "trtype": "RDMA", 00:16:56.842 "pending_data_buffer": 0, 00:16:56.842 "devices": [ 00:16:56.842 { 00:16:56.842 "name": "mlx5_0", 00:16:56.842 "polls": 5581, 00:16:56.842 "idle_polls": 5581, 00:16:56.842 "completions": 0, 00:16:56.842 "requests": 0, 00:16:56.842 "request_latency": 0, 00:16:56.842 "pending_free_request": 0, 00:16:56.842 "pending_rdma_read": 0, 00:16:56.842 "pending_rdma_write": 0, 00:16:56.842 "pending_rdma_send": 0, 00:16:56.842 "total_send_wrs": 0, 00:16:56.842 "send_doorbell_updates": 0, 00:16:56.842 "total_recv_wrs": 4096, 00:16:56.842 "recv_doorbell_updates": 1 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "mlx5_1", 00:16:56.842 "polls": 5581, 00:16:56.842 "idle_polls": 5581, 00:16:56.842 "completions": 0, 00:16:56.842 "requests": 0, 00:16:56.842 "request_latency": 0, 00:16:56.842 "pending_free_request": 0, 00:16:56.842 "pending_rdma_read": 0, 00:16:56.842 "pending_rdma_write": 0, 00:16:56.842 "pending_rdma_send": 0, 00:16:56.842 "total_send_wrs": 0, 00:16:56.842 "send_doorbell_updates": 0, 00:16:56.842 "total_recv_wrs": 4096, 00:16:56.842 "recv_doorbell_updates": 1 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "nvmf_tgt_poll_group_003", 00:16:56.842 "admin_qpairs": 0, 00:16:56.842 "io_qpairs": 0, 00:16:56.842 "current_admin_qpairs": 0, 00:16:56.842 "current_io_qpairs": 0, 00:16:56.842 "pending_bdev_io": 0, 00:16:56.842 "completed_nvme_io": 0, 00:16:56.842 "transports": [ 00:16:56.842 { 00:16:56.842 "trtype": "RDMA", 00:16:56.842 "pending_data_buffer": 0, 00:16:56.842 "devices": [ 00:16:56.842 { 00:16:56.842 "name": "mlx5_0", 00:16:56.842 "polls": 917, 00:16:56.842 "idle_polls": 917, 00:16:56.842 "completions": 0, 00:16:56.842 "requests": 0, 00:16:56.842 "request_latency": 0, 00:16:56.842 "pending_free_request": 0, 00:16:56.842 "pending_rdma_read": 0, 00:16:56.842 "pending_rdma_write": 0, 00:16:56.842 "pending_rdma_send": 0, 00:16:56.842 "total_send_wrs": 0, 00:16:56.842 "send_doorbell_updates": 0, 00:16:56.842 "total_recv_wrs": 4096, 00:16:56.842 "recv_doorbell_updates": 1 00:16:56.842 }, 00:16:56.842 { 00:16:56.842 "name": "mlx5_1", 
00:16:56.842 "polls": 917, 00:16:56.842 "idle_polls": 917, 00:16:56.842 "completions": 0, 00:16:56.842 "requests": 0, 00:16:56.842 "request_latency": 0, 00:16:56.842 "pending_free_request": 0, 00:16:56.842 "pending_rdma_read": 0, 00:16:56.842 "pending_rdma_write": 0, 00:16:56.842 "pending_rdma_send": 0, 00:16:56.842 "total_send_wrs": 0, 00:16:56.842 "send_doorbell_updates": 0, 00:16:56.842 "total_recv_wrs": 4096, 00:16:56.842 "recv_doorbell_updates": 1 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 } 00:16:56.842 ] 00:16:56.842 }' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:16:56.842 21:46:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:56.842 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:16:56.842 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:16:56.842 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:16:56.842 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:16:56.842 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:16:56.842 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:16:56.842 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:16:56.842 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:57.101 21:46:29 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.101 Malloc1 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.101 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.102 [2024-11-29 21:46:29.150510] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:57.102 21:46:29 
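This block is the access-control setup: a Malloc-backed subsystem with a listener on 192.168.100.8:4420, allow_any_host explicitly disabled (-d), and no host NQN added, so the connect that follows is expected to fail. The RPC sequence, stripped of the rpc_cmd plumbing:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420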
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:16:57.102 [2024-11-29 21:46:29.196854] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:16:57.102 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:57.102 could not add new controller: failed to write to nvme-fabrics device 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.102 21:46:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:58.036 21:46:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.036 21:46:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:58.036 21:46:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.036 21:46:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:58.036 21:46:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:00.572 21:46:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:00.572 21:46:32 
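NOT wraps a command that is supposed to fail: the target rejects the unknown host NQN, nvme connect exits nonzero ("could not add new controller"), and NOT converts that failure into success. Reduced to its core (the real helper in autotest_common.sh also special-cases exit codes above 128, which indicate signals):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))      # succeed only when the wrapped command did not
    }
    NOT nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    # after nvmf_subsystem_add_host, the same connect (run without NOT) must succeed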
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:00.572 21:46:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.572 21:46:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:00.572 21:46:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.572 21:46:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:00.572 21:46:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
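waitforserial and waitforserial_disconnect poll lsblk until a block device carrying the subsystem serial appears or vanishes; the appearance side, reconstructed from the loop traced at autotest_common.sh@1198-1208:

    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # one namespace expected; grep -c counts devices with a matching SERIAL column
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME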
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:01.141 [2024-11-29 21:46:33.298561] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:17:01.141 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:01.141 could not add new controller: failed to write to nvme-fabrics device 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.141 21:46:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:02.520 21:46:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:02.520 21:46:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:02.520 21:46:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.520 21:46:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:02.520 21:46:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:04.430 21:46:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:04.430 21:46:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:04.430 21:46:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.430 21:46:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:04.430 21:46:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.430 21:46:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:04.430 21:46:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 [2024-11-29 21:46:37.366552] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.367 21:46:37 
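From here the test enters its five-iteration loop (target/rpc.sh@81-94): each pass rebuilds the subsystem, re-attaches Malloc1 as namespace 5, connects, waits for the serial, then disconnects and tears everything down. One iteration, with the --hostnqn/--hostid arguments elided:

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done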
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.367 21:46:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:06.305 21:46:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.305 21:46:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:06.305 21:46:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.305 21:46:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:06.305 21:46:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:08.211 21:46:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:08.211 21:46:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:08.211 21:46:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.212 21:46:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:08.212 21:46:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.212 21:46:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:08.212 21:46:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.225 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.225 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:09.225 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.226 [2024-11-29 21:46:41.398628] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.226 21:46:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:10.162 21:46:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:10.162 21:46:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:10.162 21:46:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.162 21:46:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:10.162 21:46:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:12.697 21:46:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:12.697 21:46:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:12.697 
21:46:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:12.697 21:46:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:12.697 21:46:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.697 21:46:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:12.697 21:46:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:13.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.276 [2024-11-29 21:46:45.444417] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.276 21:46:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:14.211 21:46:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.211 21:46:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:14.211 21:46:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.211 21:46:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:14.211 21:46:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:16.747 21:46:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:16.747 21:46:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:16.747 21:46:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.747 21:46:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:16.747 21:46:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.747 21:46:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:16.747 21:46:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:17.315 21:46:49 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.315 [2024-11-29 21:46:49.485611] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.315 21:46:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:18.251 21:46:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.251 21:46:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:18.251 21:46:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.251 21:46:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:18.251 21:46:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:20.785 21:46:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:20.785 21:46:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:20.785 21:46:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.785 21:46:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:20.785 21:46:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.785 21:46:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:20.785 21:46:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:21.355 21:46:53 
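Each connect above is followed by waitforserial (common/autotest_common.sh@1198-1208), which polls lsblk until a block device carrying the subsystem's serial shows up. A sketch reconstructed from the trace; the optional expected-device-count argument and the exact placement of the first sleep relative to the loop test are inferred, not verbatim:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0   # trace shows [[ -n '' ]], i.e. no 2nd arg
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }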
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.355 [2024-11-29 21:46:53.522821] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.355 21:46:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:22.294 21:46:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:22.294 21:46:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:22.294 21:46:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.294 21:46:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:22.294 21:46:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:24.828 21:46:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:24.828 21:46:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:24.828 21:46:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.828 21:46:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:24.828 21:46:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:17:24.828 21:46:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:24.828 21:46:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:25.396 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 [2024-11-29 21:46:57.572252] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 [2024-11-29 21:46:57.620458] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.397 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 [2024-11-29 21:46:57.668624] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.657 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 [2024-11-29 21:46:57.716802] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 [2024-11-29 21:46:57.764971] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 
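The second loop just traced (target/rpc.sh@99-107, seq 1 5) repeats five create/teardown cycles with no host connect in between, exercising only the RPC plane: namespace Malloc1 is added without an explicit -n, so the target auto-assigns nsid 1, which is what the matching remove_ns then deletes. Reconstructed from the xtrace output:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid auto-assigned -> 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done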
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.658 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:25.658 "tick_rate": 2500000000, 00:17:25.658 "poll_groups": [ 00:17:25.658 { 00:17:25.658 "name": "nvmf_tgt_poll_group_000", 00:17:25.658 "admin_qpairs": 2, 00:17:25.658 "io_qpairs": 27, 00:17:25.658 "current_admin_qpairs": 0, 00:17:25.658 "current_io_qpairs": 0, 00:17:25.658 "pending_bdev_io": 0, 00:17:25.658 "completed_nvme_io": 77, 00:17:25.658 "transports": [ 00:17:25.658 { 00:17:25.658 "trtype": "RDMA", 00:17:25.658 "pending_data_buffer": 0, 00:17:25.658 "devices": [ 00:17:25.658 { 00:17:25.658 "name": "mlx5_0", 00:17:25.658 "polls": 3658056, 00:17:25.658 "idle_polls": 3657813, 00:17:25.658 "completions": 263, 00:17:25.658 "requests": 131, 00:17:25.658 "request_latency": 22402520, 00:17:25.658 "pending_free_request": 0, 00:17:25.658 "pending_rdma_read": 0, 00:17:25.658 "pending_rdma_write": 0, 00:17:25.658 "pending_rdma_send": 0, 00:17:25.658 "total_send_wrs": 207, 00:17:25.658 "send_doorbell_updates": 119, 00:17:25.658 "total_recv_wrs": 4227, 00:17:25.658 "recv_doorbell_updates": 119 00:17:25.658 }, 00:17:25.658 { 00:17:25.658 "name": "mlx5_1", 00:17:25.658 "polls": 3658056, 00:17:25.658 "idle_polls": 3658056, 00:17:25.658 "completions": 0, 00:17:25.658 "requests": 0, 00:17:25.658 "request_latency": 0, 00:17:25.658 "pending_free_request": 0, 00:17:25.658 "pending_rdma_read": 0, 00:17:25.658 "pending_rdma_write": 0, 00:17:25.658 "pending_rdma_send": 0, 00:17:25.658 "total_send_wrs": 0, 00:17:25.658 "send_doorbell_updates": 0, 00:17:25.659 "total_recv_wrs": 4096, 00:17:25.659 "recv_doorbell_updates": 1 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 }, 00:17:25.659 { 00:17:25.659 "name": "nvmf_tgt_poll_group_001", 00:17:25.659 "admin_qpairs": 2, 00:17:25.659 "io_qpairs": 26, 00:17:25.659 "current_admin_qpairs": 0, 00:17:25.659 "current_io_qpairs": 0, 00:17:25.659 "pending_bdev_io": 0, 00:17:25.659 "completed_nvme_io": 78, 00:17:25.659 "transports": [ 00:17:25.659 { 00:17:25.659 "trtype": "RDMA", 00:17:25.659 "pending_data_buffer": 0, 00:17:25.659 "devices": [ 00:17:25.659 { 00:17:25.659 "name": "mlx5_0", 00:17:25.659 "polls": 3496375, 00:17:25.659 "idle_polls": 3496133, 00:17:25.659 "completions": 262, 00:17:25.659 "requests": 131, 00:17:25.659 "request_latency": 21685190, 00:17:25.659 "pending_free_request": 0, 00:17:25.659 "pending_rdma_read": 0, 00:17:25.659 "pending_rdma_write": 0, 00:17:25.659 "pending_rdma_send": 0, 00:17:25.659 "total_send_wrs": 208, 00:17:25.659 "send_doorbell_updates": 121, 00:17:25.659 "total_recv_wrs": 4227, 00:17:25.659 "recv_doorbell_updates": 122 00:17:25.659 }, 00:17:25.659 { 00:17:25.659 "name": "mlx5_1", 00:17:25.659 "polls": 3496375, 00:17:25.659 "idle_polls": 3496375, 00:17:25.659 "completions": 0, 00:17:25.659 "requests": 0, 00:17:25.659 "request_latency": 0, 00:17:25.659 "pending_free_request": 0, 00:17:25.659 
"pending_rdma_read": 0, 00:17:25.659 "pending_rdma_write": 0, 00:17:25.659 "pending_rdma_send": 0, 00:17:25.659 "total_send_wrs": 0, 00:17:25.659 "send_doorbell_updates": 0, 00:17:25.659 "total_recv_wrs": 4096, 00:17:25.659 "recv_doorbell_updates": 1 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 }, 00:17:25.659 { 00:17:25.659 "name": "nvmf_tgt_poll_group_002", 00:17:25.659 "admin_qpairs": 1, 00:17:25.659 "io_qpairs": 26, 00:17:25.659 "current_admin_qpairs": 0, 00:17:25.659 "current_io_qpairs": 0, 00:17:25.659 "pending_bdev_io": 0, 00:17:25.659 "completed_nvme_io": 124, 00:17:25.659 "transports": [ 00:17:25.659 { 00:17:25.659 "trtype": "RDMA", 00:17:25.659 "pending_data_buffer": 0, 00:17:25.659 "devices": [ 00:17:25.659 { 00:17:25.659 "name": "mlx5_0", 00:17:25.659 "polls": 3690391, 00:17:25.659 "idle_polls": 3690127, 00:17:25.659 "completions": 305, 00:17:25.659 "requests": 152, 00:17:25.659 "request_latency": 33049674, 00:17:25.659 "pending_free_request": 0, 00:17:25.659 "pending_rdma_read": 0, 00:17:25.659 "pending_rdma_write": 0, 00:17:25.659 "pending_rdma_send": 0, 00:17:25.659 "total_send_wrs": 264, 00:17:25.659 "send_doorbell_updates": 129, 00:17:25.659 "total_recv_wrs": 4248, 00:17:25.659 "recv_doorbell_updates": 129 00:17:25.659 }, 00:17:25.659 { 00:17:25.659 "name": "mlx5_1", 00:17:25.659 "polls": 3690391, 00:17:25.659 "idle_polls": 3690391, 00:17:25.659 "completions": 0, 00:17:25.659 "requests": 0, 00:17:25.659 "request_latency": 0, 00:17:25.659 "pending_free_request": 0, 00:17:25.659 "pending_rdma_read": 0, 00:17:25.659 "pending_rdma_write": 0, 00:17:25.659 "pending_rdma_send": 0, 00:17:25.659 "total_send_wrs": 0, 00:17:25.659 "send_doorbell_updates": 0, 00:17:25.659 "total_recv_wrs": 4096, 00:17:25.659 "recv_doorbell_updates": 1 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 }, 00:17:25.659 { 00:17:25.659 "name": "nvmf_tgt_poll_group_003", 00:17:25.659 "admin_qpairs": 2, 00:17:25.659 "io_qpairs": 26, 00:17:25.659 "current_admin_qpairs": 0, 00:17:25.659 "current_io_qpairs": 0, 00:17:25.659 "pending_bdev_io": 0, 00:17:25.659 "completed_nvme_io": 176, 00:17:25.659 "transports": [ 00:17:25.659 { 00:17:25.659 "trtype": "RDMA", 00:17:25.659 "pending_data_buffer": 0, 00:17:25.659 "devices": [ 00:17:25.659 { 00:17:25.659 "name": "mlx5_0", 00:17:25.659 "polls": 2807841, 00:17:25.659 "idle_polls": 2807448, 00:17:25.659 "completions": 456, 00:17:25.659 "requests": 228, 00:17:25.659 "request_latency": 51256966, 00:17:25.659 "pending_free_request": 0, 00:17:25.659 "pending_rdma_read": 0, 00:17:25.659 "pending_rdma_write": 0, 00:17:25.659 "pending_rdma_send": 0, 00:17:25.659 "total_send_wrs": 402, 00:17:25.659 "send_doorbell_updates": 191, 00:17:25.659 "total_recv_wrs": 4324, 00:17:25.659 "recv_doorbell_updates": 192 00:17:25.659 }, 00:17:25.659 { 00:17:25.659 "name": "mlx5_1", 00:17:25.659 "polls": 2807841, 00:17:25.659 "idle_polls": 2807841, 00:17:25.659 "completions": 0, 00:17:25.659 "requests": 0, 00:17:25.659 "request_latency": 0, 00:17:25.659 "pending_free_request": 0, 00:17:25.659 "pending_rdma_read": 0, 00:17:25.659 "pending_rdma_write": 0, 00:17:25.659 "pending_rdma_send": 0, 00:17:25.659 "total_send_wrs": 0, 00:17:25.659 "send_doorbell_updates": 0, 00:17:25.659 "total_recv_wrs": 4096, 00:17:25.659 "recv_doorbell_updates": 1 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 } 00:17:25.659 ] 00:17:25.659 }' 00:17:25.659 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:17:25.659 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:25.659 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:25.659 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:25.659 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:25.659 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:25.659 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:25.659 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1286 > 0 )) 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:17:25.919 21:46:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 128394350 > 0 )) 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:25.919 rmmod nvme_rdma 00:17:25.919 rmmod nvme_fabrics 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.919 
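The jsum passes above fold the nvmf_get_stats JSON down to single totals; the traced helper (target/rpc.sh@19-20) is simply jq piped into an awk accumulator. A sketch, assuming the stats JSON captured at @110 is fed to jq via a here-string (the trace does not show how the input actually reaches it):

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

All four asserted totals reconcile with the dumped JSON: admin_qpairs 2+2+1+2 = 7, io_qpairs 27+26+26+26 = 105, completions 263+262+305+456 = 1286 (the mlx5_1 ports contribute zero), and request_latency 22402520+21685190+33049674+51256966 = 128394350.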
21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 3016799 ']' 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 3016799 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3016799 ']' 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3016799 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3016799 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3016799' 00:17:25.919 killing process with pid 3016799 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3016799 00:17:25.919 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3016799 00:17:26.177 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:26.177 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:17:26.177 00:17:26.177 real 0m37.024s 00:17:26.177 user 2m1.628s 00:17:26.177 sys 0m6.823s 00:17:26.177 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:26.177 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.177 ************************************ 00:17:26.177 END TEST nvmf_rpc 00:17:26.177 ************************************ 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.436 ************************************ 00:17:26.436 START TEST nvmf_invalid 00:17:26.436 ************************************ 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:17:26.436 * Looking for test storage... 
00:17:26.436 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.436 --rc genhtml_branch_coverage=1 00:17:26.436 --rc genhtml_function_coverage=1 00:17:26.436 --rc genhtml_legend=1 00:17:26.436 --rc geninfo_all_blocks=1 00:17:26.436 --rc geninfo_unexecuted_blocks=1 00:17:26.436 00:17:26.436 ' 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.436 --rc genhtml_branch_coverage=1 00:17:26.436 --rc genhtml_function_coverage=1 00:17:26.436 --rc genhtml_legend=1 00:17:26.436 --rc geninfo_all_blocks=1 00:17:26.436 --rc geninfo_unexecuted_blocks=1 00:17:26.436 00:17:26.436 ' 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.436 --rc genhtml_branch_coverage=1 00:17:26.436 --rc genhtml_function_coverage=1 00:17:26.436 --rc genhtml_legend=1 00:17:26.436 --rc geninfo_all_blocks=1 00:17:26.436 --rc geninfo_unexecuted_blocks=1 00:17:26.436 00:17:26.436 ' 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.436 --rc genhtml_branch_coverage=1 00:17:26.436 --rc genhtml_function_coverage=1 00:17:26.436 --rc genhtml_legend=1 00:17:26.436 --rc geninfo_all_blocks=1 00:17:26.436 --rc geninfo_unexecuted_blocks=1 00:17:26.436 00:17:26.436 ' 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:26.436 
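The cmp_versions walk traced above is the coverage gate in common/autotest_common.sh@1680-1695: if the installed lcov is older than 2.x, branch/function-coverage flags are exported for later geninfo/genhtml runs. A condensed, illustrative shape of that gate (lt wraps the scripts/common.sh cmp_versions helper stepped through above; the full LCOV_OPTS string includes the genhtml flags shown in the trace):

    lcov_ver=$(lcov --version | awk '{print $NF}')
    if lt "$lcov_ver" 2; then                # lt "$1" 2 == cmp_versions "$1" "<" 2
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
        export LCOV_OPTS=" $lcov_rc_opt ..."
        export LCOV="lcov $lcov_rc_opt ..."
    fi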
21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.436 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.695 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:26.695 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:26.695 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.695 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.695 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:26.696 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
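[editor's note] The stray stderr line above, "[: : integer expression expected" from nvmf/common.sh line 33, is the test builtin refusing to compare an empty expansion with -eq; the test simply evaluates false and the script keeps going. A tiny reproduction, with a hypothetical flag variable standing in for whatever common.sh actually checks there:

    flag=""                             # empty, like the traced '[' '' -eq 1 ']'
    if [ "$flag" -eq 1 ]; then          # stderr: [: : integer expression expected; status is false
        echo "branch not taken"
    fi
    [ "${flag:-0}" -eq 1 ]              # defaulting the expansion is the quiet way to write the same test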
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:26.696 21:46:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.259 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.260 21:47:05 
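[editor's note] nvmftestinit arms its cleanup before touching anything: the trap at nvmf/common.sh@470 guarantees nvmftestfini runs whether the test exits normally or is killed, and invalid.sh later swaps in a variant that also dumps the app's shared memory. The pattern in isolation, with a placeholder body:

    nvmftestfini() {
        # placeholder body: the real function stops nvmf_tgt and undoes prepare_net_devs
        echo "tearing down target and net devices"
    }
    trap nvmftestfini SIGINT SIGTERM EXIT
    # ... test body; the trap fires once on normal exit or on either signal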
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:33.260 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:33.260 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:33.260 21:47:05 
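[editor's note] gather_supported_nvmf_pci_devs buckets PCI functions into family arrays (e810, x722, mlx) by their "vendor:device" id and then walks the result; both ports of a ConnectX-4 Lx (0x15b3:0x1015) turn up here, and since the id is neither 0x1017 nor 0x1019 the connect timeout is simply bumped to 'nvme connect -i 15'. A condensed sketch of the bucketing, with a stubbed pci_bus_cache in place of the real lspci-derived map:

    declare -A pci_bus_cache=(["0x15b3:0x1015"]="0000:d9:00.0 0000:d9:00.1")   # stub
    mellanox=0x15b3
    mlx=()
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # unquoted on purpose: word-splits into two entries
    pci_devs=("${mlx[@]}")
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci (0x15b3 - 0x1015)"
    done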
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:33.260 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:33.260 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # rdma_device_init 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
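[editor's note] With the functions identified, each kernel netdev name is resolved straight from sysfs, which is where mlx_0_0 and mlx_0_1 come from. The same two-step lookup in isolation:

    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one glob match per netdev owned by the function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the basename
    echo "Found net devices under $pci: ${pci_net_devs[*]}"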
nvmf/common.sh@62 -- # uname 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@526 -- # allocate_nic_ips 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.260 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
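[editor's note] rdma_device_init loads the full IB/RDMA core stack (Linux only, per the uname guard), then get_rdma_if_list keeps just the netdevs that rxe_cfg also reports; the odd-looking '\m\l\x\_\0\_\0' strings are only xtrace escaping the literal right-hand side of a [[ == ]] match. The module sequence by itself (requires root):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done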
nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:33.261 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.261 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:33.261 altname enp217s0f0np0 00:17:33.261 altname ens818f0np0 00:17:33.261 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.261 valid_lft forever preferred_lft forever 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:33.261 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.261 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:33.261 altname enp217s0f1np1 00:17:33.261 altname ens818f1np1 00:17:33.261 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.261 valid_lft forever preferred_lft forever 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.261 21:47:05 
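[editor's note] allocate_nic_ips then reads each port's address back to confirm the 192.168.100.0/24 assignments; get_ip_address is a single ip(8) pipeline, reconstructed here from the trace:

    get_ip_address() {
        local interface=$1
        # field 4 of 'ip -o -4 addr show' is e.g. 192.168.100.8/24; cut drops the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this host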
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:17:33.261 192.168.100.9' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:17:33.261 192.168.100.9' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # head -n 1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:17:33.261 192.168.100.9' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@482 -- # tail -n +2 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # head -n 1 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=3025413 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 3025413 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3025413 ']' 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.261 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.262 [2024-11-29 21:47:05.394200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:33.262 [2024-11-29 21:47:05.394253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.262 [2024-11-29 21:47:05.463417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.262 [2024-11-29 21:47:05.503231] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.262 [2024-11-29 21:47:05.503276] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
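[editor's note] The available addresses come back as one newline-separated list, and the first and second target IPs are peeled off with head/tail exactly as traced at common.sh@481-482:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9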
00:17:33.262 [2024-11-29 21:47:05.503285] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.262 [2024-11-29 21:47:05.503294] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.262 [2024-11-29 21:47:05.503301] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.262 [2024-11-29 21:47:05.503353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.262 [2024-11-29 21:47:05.503423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.262 [2024-11-29 21:47:05.503510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.262 [2024-11-29 21:47:05.503512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.520 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.520 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:33.520 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:33.520 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.520 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.520 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.520 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:33.520 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13443 00:17:33.779 [2024-11-29 21:47:05.818693] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:33.779 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:33.779 { 00:17:33.779 "nqn": "nqn.2016-06.io.spdk:cnode13443", 00:17:33.779 "tgt_name": "foobar", 00:17:33.779 "method": "nvmf_create_subsystem", 00:17:33.779 "req_id": 1 00:17:33.779 } 00:17:33.779 Got JSON-RPC error response 00:17:33.779 response: 00:17:33.779 { 00:17:33.779 "code": -32603, 00:17:33.779 "message": "Unable to find target foobar" 00:17:33.779 }' 00:17:33.779 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:33.779 { 00:17:33.779 "nqn": "nqn.2016-06.io.spdk:cnode13443", 00:17:33.779 "tgt_name": "foobar", 00:17:33.779 "method": "nvmf_create_subsystem", 00:17:33.779 "req_id": 1 00:17:33.779 } 00:17:33.779 Got JSON-RPC error response 00:17:33.779 response: 00:17:33.779 { 00:17:33.779 "code": -32603, 00:17:33.779 "message": "Unable to find target foobar" 00:17:33.779 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:33.779 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:33.779 21:47:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26506 00:17:33.779 [2024-11-29 21:47:06.023395] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode26506: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:34.038 { 00:17:34.038 "nqn": "nqn.2016-06.io.spdk:cnode26506", 00:17:34.038 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:34.038 "method": "nvmf_create_subsystem", 00:17:34.038 "req_id": 1 00:17:34.038 } 00:17:34.038 Got JSON-RPC error response 00:17:34.038 response: 00:17:34.038 { 00:17:34.038 "code": -32602, 00:17:34.038 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:34.038 }' 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:34.038 { 00:17:34.038 "nqn": "nqn.2016-06.io.spdk:cnode26506", 00:17:34.038 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:34.038 "method": "nvmf_create_subsystem", 00:17:34.038 "req_id": 1 00:17:34.038 } 00:17:34.038 Got JSON-RPC error response 00:17:34.038 response: 00:17:34.038 { 00:17:34.038 "code": -32602, 00:17:34.038 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:34.038 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2957 00:17:34.038 [2024-11-29 21:47:06.236088] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2957: invalid model number 'SPDK_Controller' 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:34.038 { 00:17:34.038 "nqn": "nqn.2016-06.io.spdk:cnode2957", 00:17:34.038 "model_number": "SPDK_Controller\u001f", 00:17:34.038 "method": "nvmf_create_subsystem", 00:17:34.038 "req_id": 1 00:17:34.038 } 00:17:34.038 Got JSON-RPC error response 00:17:34.038 response: 00:17:34.038 { 00:17:34.038 "code": -32602, 00:17:34.038 "message": "Invalid MN SPDK_Controller\u001f" 00:17:34.038 }' 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:34.038 { 00:17:34.038 "nqn": "nqn.2016-06.io.spdk:cnode2957", 00:17:34.038 "model_number": "SPDK_Controller\u001f", 00:17:34.038 "method": "nvmf_create_subsystem", 00:17:34.038 "req_id": 1 00:17:34.038 } 00:17:34.038 Got JSON-RPC error response 00:17:34.038 response: 00:17:34.038 { 00:17:34.038 "code": -32602, 00:17:34.038 "message": "Invalid MN SPDK_Controller\u001f" 00:17:34.038 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
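[editor's note] All three rejections follow one pattern: call nvmf_create_subsystem with a bad target name, serial number, or model number, capture the JSON-RPC error, and glob-match the message; the illegal byte in the SN and MN cases is an ANSI-C-quoted 0x1f appended to an otherwise valid string. A condensed sketch of one case (rpc.py path shortened; '|| true' because the failure is the point):

    bad_sn=$'SPDKISFASTANDAWESOME\037'   # trailing unit-separator byte, as built by the traced echo -e '\x1f'
    out=$(scripts/rpc.py nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode26506 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo "target rejected the control character with code -32602"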
target/invalid.sh@21 -- # local chars 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.038 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:34.298 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:34.299 21:47:06 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]] 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'yqr[Xwb&\(yoWN%H@-G8`' 00:17:34.299 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'yqr[Xwb&\(yoWN%H@-G8`' nqn.2016-06.io.spdk:cnode14608 00:17:34.558 [2024-11-29 21:47:06.617378] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14608: invalid serial number 'yqr[Xwb&\(yoWN%H@-G8`' 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:34.558 { 00:17:34.558 "nqn": "nqn.2016-06.io.spdk:cnode14608", 00:17:34.558 "serial_number": "yqr[Xwb&\\(yoWN%H@-G8`", 00:17:34.558 "method": "nvmf_create_subsystem", 00:17:34.558 "req_id": 1 00:17:34.558 } 00:17:34.558 Got JSON-RPC error response 00:17:34.558 response: 00:17:34.558 { 00:17:34.558 "code": -32602, 00:17:34.558 "message": "Invalid SN yqr[Xwb&\\(yoWN%H@-G8`" 00:17:34.558 }' 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:34.558 { 00:17:34.558 "nqn": "nqn.2016-06.io.spdk:cnode14608", 00:17:34.558 "serial_number": "yqr[Xwb&\\(yoWN%H@-G8`", 00:17:34.558 "method": "nvmf_create_subsystem", 00:17:34.558 "req_id": 1 00:17:34.558 } 00:17:34.558 Got JSON-RPC error response 00:17:34.558 response: 00:17:34.558 { 00:17:34.558 "code": -32602, 00:17:34.558 "message": "Invalid SN yqr[Xwb&\\(yoWN%H@-G8`" 00:17:34.558 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:34.558 21:47:06 
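[editor's note] That 21-character serial, 'yqr[Xwb&\(yoWN%H@-G8`', came out of gen_random_s, whose per-character trace runs above and below: pick a printable ASCII code (32-127), render it with printf %x plus echo -e, append. Since invalid.sh@16 pinned RANDOM=0, the sequence is reproducible across runs. A condensed sketch of the generator (the real one also guards against strings starting with '-'):

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))   # same code range as the traced chars=('32' ... '127') array
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    gen_random_s 21   # -> 'yqr[Xwb&\(yoWN%H@-G8`' with this run's RANDOM state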
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.558 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x23' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:34.559 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:34.559 21:47:06 
[per-character trace trimmed: the target/invalid.sh@24-25 loop appends one random character per iteration via printf %x / echo -e, building the tail '6mjhhAw,n&2+Kgn`_g2@Vq~(v^Gy' of the model number, until (( ll < length )) fails]
00:17:34.820 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]]
00:17:34.820 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 't;H"`h#AU`o;o6mjhhAw,n&2+Kgn`_g2@Vq~(v^Gy'
00:17:34.820 21:47:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 't;H"`h#AU`o;o6mjhhAw,n&2+Kgn`_g2@Vq~(v^Gy' nqn.2016-06.io.spdk:cnode804
00:17:35.079 [2024-11-29 21:47:07.135046] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode804: invalid model number 't;H"`h#AU`o;o6mjhhAw,n&2+Kgn`_g2@Vq~(v^Gy'
00:17:35.079 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:17:35.079 {
00:17:35.079 "nqn": "nqn.2016-06.io.spdk:cnode804",
00:17:35.079 "model_number": "t;H\"`h#AU`o;o6mjhhAw,n&2+Kgn`_g2@Vq~(v^Gy",
00:17:35.079 "method": "nvmf_create_subsystem",
00:17:35.079 "req_id": 1
00:17:35.079 }
00:17:35.079 Got JSON-RPC error response
00:17:35.079 response:
00:17:35.079 {
00:17:35.079 "code": -32602,
00:17:35.079 "message": "Invalid MN t;H\"`h#AU`o;o6mjhhAw,n&2+Kgn`_g2@Vq~(v^Gy"
00:17:35.079 }'
00:17:35.079 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:17:35.079 {
00:17:35.079 "nqn": "nqn.2016-06.io.spdk:cnode804",
00:17:35.079 "model_number": "t;H\"`h#AU`o;o6mjhhAw,n&2+Kgn`_g2@Vq~(v^Gy",
00:17:35.079 "method": "nvmf_create_subsystem",
00:17:35.079 "req_id": 1
00:17:35.079 }
00:17:35.079 Got JSON-RPC error response
00:17:35.079 response:
00:17:35.079 {
00:17:35.079 "code": -32602,
00:17:35.079 "message": "Invalid MN t;H\"`h#AU`o;o6mjhhAw,n&2+Kgn`_g2@Vq~(v^Gy"
00:17:35.079 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:17:35.079 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma
00:17:35.338 [2024-11-29 21:47:07.346772] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a81b60/0x1a86030) succeed.
00:17:35.338 [2024-11-29 21:47:07.357019] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a83150/0x1ac76d0) succeed.
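Condensed into a runnable sketch, the model-number check traced above amounts to the following. gen_random_string here is an illustrative stand-in, not the verbatim invalid.sh source; only rpc.py, the -d flag, the NQN, and the "Invalid MN" message come from the log, while the character range and the length of 41 are assumptions inferred from the generated string.

# Build a random model number from printable ASCII and expect
# nvmf_create_subsystem to reject it with "Invalid MN".
gen_random_string() {
    local length=$1 string='' ll ch
    for (( ll = 0; ll < length; ll++ )); do
        ch=$(( RANDOM % 95 + 32 ))                  # printable ASCII, 0x20..0x7e
        string+=$(echo -e "\x$(printf %x "$ch")")   # same printf %x / echo -e pair as the trace
    done
    printf '%s\n' "$string"
}

MN=$(gen_random_string 41)
out=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_create_subsystem -d "$MN" nqn.2016-06.io.spdk:cnode804 2>&1) || true
[[ $out == *"Invalid MN"* ]]   # passes only if the target rejected the model number

The cntlid-range, listener-removal, and delete-target checks that follow keep this same shape: capture the JSON-RPC error text into out=, then [[-match it against the expected message, so the later assertions are not sketched separately.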
00:17:35.338 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:35.597 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:17:35.597 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:17:35.597 192.168.100.9' 00:17:35.597 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:35.597 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:17:35.597 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:17:35.857 [2024-11-29 21:47:07.873519] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:35.857 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:35.857 { 00:17:35.857 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:35.857 "listen_address": { 00:17:35.857 "trtype": "rdma", 00:17:35.857 "traddr": "192.168.100.8", 00:17:35.857 "trsvcid": "4421" 00:17:35.857 }, 00:17:35.857 "method": "nvmf_subsystem_remove_listener", 00:17:35.857 "req_id": 1 00:17:35.857 } 00:17:35.857 Got JSON-RPC error response 00:17:35.857 response: 00:17:35.857 { 00:17:35.857 "code": -32602, 00:17:35.857 "message": "Invalid parameters" 00:17:35.857 }' 00:17:35.857 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:35.857 { 00:17:35.857 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:35.857 "listen_address": { 00:17:35.857 "trtype": "rdma", 00:17:35.857 "traddr": "192.168.100.8", 00:17:35.857 "trsvcid": "4421" 00:17:35.857 }, 00:17:35.857 "method": "nvmf_subsystem_remove_listener", 00:17:35.857 "req_id": 1 00:17:35.857 } 00:17:35.857 Got JSON-RPC error response 00:17:35.857 response: 00:17:35.857 { 00:17:35.857 "code": -32602, 00:17:35.857 "message": "Invalid parameters" 00:17:35.857 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:35.857 21:47:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19006 -i 0 00:17:35.857 [2024-11-29 21:47:08.082258] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19006: invalid cntlid range [0-65519] 00:17:36.117 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:36.117 { 00:17:36.117 "nqn": "nqn.2016-06.io.spdk:cnode19006", 00:17:36.117 "min_cntlid": 0, 00:17:36.117 "method": "nvmf_create_subsystem", 00:17:36.117 "req_id": 1 00:17:36.117 } 00:17:36.117 Got JSON-RPC error response 00:17:36.117 response: 00:17:36.117 { 00:17:36.117 "code": -32602, 00:17:36.117 "message": "Invalid cntlid range [0-65519]" 00:17:36.117 }' 00:17:36.117 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:36.117 { 00:17:36.117 "nqn": "nqn.2016-06.io.spdk:cnode19006", 00:17:36.117 "min_cntlid": 0, 00:17:36.117 "method": "nvmf_create_subsystem", 00:17:36.117 "req_id": 1 00:17:36.117 } 00:17:36.117 Got JSON-RPC error response 00:17:36.117 response: 00:17:36.117 { 00:17:36.117 "code": -32602, 00:17:36.117 "message": 
"Invalid cntlid range [0-65519]" 00:17:36.117 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:36.117 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8438 -i 65520 00:17:36.117 [2024-11-29 21:47:08.286985] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8438: invalid cntlid range [65520-65519] 00:17:36.117 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:36.117 { 00:17:36.117 "nqn": "nqn.2016-06.io.spdk:cnode8438", 00:17:36.117 "min_cntlid": 65520, 00:17:36.117 "method": "nvmf_create_subsystem", 00:17:36.117 "req_id": 1 00:17:36.117 } 00:17:36.117 Got JSON-RPC error response 00:17:36.117 response: 00:17:36.117 { 00:17:36.117 "code": -32602, 00:17:36.117 "message": "Invalid cntlid range [65520-65519]" 00:17:36.117 }' 00:17:36.117 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:36.117 { 00:17:36.117 "nqn": "nqn.2016-06.io.spdk:cnode8438", 00:17:36.117 "min_cntlid": 65520, 00:17:36.117 "method": "nvmf_create_subsystem", 00:17:36.117 "req_id": 1 00:17:36.117 } 00:17:36.117 Got JSON-RPC error response 00:17:36.117 response: 00:17:36.117 { 00:17:36.117 "code": -32602, 00:17:36.117 "message": "Invalid cntlid range [65520-65519]" 00:17:36.117 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:36.117 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1294 -I 0 00:17:36.376 [2024-11-29 21:47:08.487724] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1294: invalid cntlid range [1-0] 00:17:36.376 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:36.376 { 00:17:36.376 "nqn": "nqn.2016-06.io.spdk:cnode1294", 00:17:36.376 "max_cntlid": 0, 00:17:36.376 "method": "nvmf_create_subsystem", 00:17:36.377 "req_id": 1 00:17:36.377 } 00:17:36.377 Got JSON-RPC error response 00:17:36.377 response: 00:17:36.377 { 00:17:36.377 "code": -32602, 00:17:36.377 "message": "Invalid cntlid range [1-0]" 00:17:36.377 }' 00:17:36.377 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:36.377 { 00:17:36.377 "nqn": "nqn.2016-06.io.spdk:cnode1294", 00:17:36.377 "max_cntlid": 0, 00:17:36.377 "method": "nvmf_create_subsystem", 00:17:36.377 "req_id": 1 00:17:36.377 } 00:17:36.377 Got JSON-RPC error response 00:17:36.377 response: 00:17:36.377 { 00:17:36.377 "code": -32602, 00:17:36.377 "message": "Invalid cntlid range [1-0]" 00:17:36.377 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:36.377 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2718 -I 65520 00:17:36.635 [2024-11-29 21:47:08.672398] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2718: invalid cntlid range [1-65520] 00:17:36.635 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:36.635 { 00:17:36.635 "nqn": "nqn.2016-06.io.spdk:cnode2718", 00:17:36.635 "max_cntlid": 65520, 00:17:36.635 "method": "nvmf_create_subsystem", 00:17:36.635 "req_id": 1 00:17:36.635 } 00:17:36.635 Got JSON-RPC error 
response 00:17:36.635 response: 00:17:36.635 { 00:17:36.635 "code": -32602, 00:17:36.635 "message": "Invalid cntlid range [1-65520]" 00:17:36.635 }' 00:17:36.635 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:36.635 { 00:17:36.635 "nqn": "nqn.2016-06.io.spdk:cnode2718", 00:17:36.635 "max_cntlid": 65520, 00:17:36.636 "method": "nvmf_create_subsystem", 00:17:36.636 "req_id": 1 00:17:36.636 } 00:17:36.636 Got JSON-RPC error response 00:17:36.636 response: 00:17:36.636 { 00:17:36.636 "code": -32602, 00:17:36.636 "message": "Invalid cntlid range [1-65520]" 00:17:36.636 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:36.636 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3332 -i 6 -I 5 00:17:36.636 [2024-11-29 21:47:08.873076] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3332: invalid cntlid range [6-5] 00:17:36.894 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:36.894 { 00:17:36.894 "nqn": "nqn.2016-06.io.spdk:cnode3332", 00:17:36.894 "min_cntlid": 6, 00:17:36.894 "max_cntlid": 5, 00:17:36.894 "method": "nvmf_create_subsystem", 00:17:36.894 "req_id": 1 00:17:36.894 } 00:17:36.894 Got JSON-RPC error response 00:17:36.894 response: 00:17:36.894 { 00:17:36.894 "code": -32602, 00:17:36.894 "message": "Invalid cntlid range [6-5]" 00:17:36.894 }' 00:17:36.894 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:36.894 { 00:17:36.894 "nqn": "nqn.2016-06.io.spdk:cnode3332", 00:17:36.894 "min_cntlid": 6, 00:17:36.894 "max_cntlid": 5, 00:17:36.894 "method": "nvmf_create_subsystem", 00:17:36.894 "req_id": 1 00:17:36.894 } 00:17:36.894 Got JSON-RPC error response 00:17:36.894 response: 00:17:36.894 { 00:17:36.894 "code": -32602, 00:17:36.894 "message": "Invalid cntlid range [6-5]" 00:17:36.894 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:36.894 21:47:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:36.894 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:36.894 { 00:17:36.894 "name": "foobar", 00:17:36.894 "method": "nvmf_delete_target", 00:17:36.894 "req_id": 1 00:17:36.894 } 00:17:36.894 Got JSON-RPC error response 00:17:36.894 response: 00:17:36.894 { 00:17:36.894 "code": -32602, 00:17:36.894 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:36.894 }' 00:17:36.894 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:36.894 { 00:17:36.894 "name": "foobar", 00:17:36.894 "method": "nvmf_delete_target", 00:17:36.894 "req_id": 1 00:17:36.894 } 00:17:36.895 Got JSON-RPC error response 00:17:36.895 response: 00:17:36.895 { 00:17:36.895 "code": -32602, 00:17:36.895 "message": "The specified target doesn't exist, cannot delete it." 
00:17:36.895 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:36.895 rmmod nvme_rdma 00:17:36.895 rmmod nvme_fabrics 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 3025413 ']' 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 3025413 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3025413 ']' 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3025413 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3025413 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3025413' 00:17:36.895 killing process with pid 3025413 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3025413 00:17:36.895 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3025413 00:17:37.154 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:37.154 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:17:37.154 00:17:37.154 real 0m10.884s 00:17:37.154 user 0m19.560s 00:17:37.154 sys 0m6.266s 00:17:37.154 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.154 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.154 ************************************ 00:17:37.154 
END TEST nvmf_invalid 00:17:37.154 ************************************ 00:17:37.414 21:47:09 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.415 ************************************ 00:17:37.415 START TEST nvmf_connect_stress 00:17:37.415 ************************************ 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:37.415 * Looking for test storage... 00:17:37.415 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:37.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.415 --rc genhtml_branch_coverage=1 00:17:37.415 --rc genhtml_function_coverage=1 00:17:37.415 --rc genhtml_legend=1 00:17:37.415 --rc geninfo_all_blocks=1 00:17:37.415 --rc geninfo_unexecuted_blocks=1 00:17:37.415 00:17:37.415 ' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:37.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.415 --rc genhtml_branch_coverage=1 00:17:37.415 --rc genhtml_function_coverage=1 00:17:37.415 --rc genhtml_legend=1 00:17:37.415 --rc geninfo_all_blocks=1 00:17:37.415 --rc geninfo_unexecuted_blocks=1 00:17:37.415 00:17:37.415 ' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:37.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.415 --rc genhtml_branch_coverage=1 00:17:37.415 --rc genhtml_function_coverage=1 00:17:37.415 --rc genhtml_legend=1 00:17:37.415 --rc geninfo_all_blocks=1 00:17:37.415 --rc geninfo_unexecuted_blocks=1 00:17:37.415 00:17:37.415 ' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:37.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.415 --rc genhtml_branch_coverage=1 00:17:37.415 --rc genhtml_function_coverage=1 00:17:37.415 --rc genhtml_legend=1 00:17:37.415 --rc geninfo_all_blocks=1 00:17:37.415 --rc geninfo_unexecuted_blocks=1 00:17:37.415 00:17:37.415 ' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.415 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.675 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:37.675 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:37.676 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:37.676 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.676 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.676 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.676 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:37.676 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:37.676 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.676 21:47:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:44.248 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:44.248 
21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:44.248 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:44.248 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:44.248 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:44.248 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- 
# is_hw=yes 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # rdma_device_init 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@526 -- # allocate_nic_ips 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.249 21:47:16 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:44.249 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:44.249 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:44.249 altname enp217s0f0np0 00:17:44.249 altname ens818f0np0 00:17:44.249 inet 192.168.100.8/24 scope global mlx_0_0 00:17:44.249 valid_lft forever preferred_lft forever 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:44.249 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:44.249 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:44.249 altname enp217s0f1np1 00:17:44.249 altname ens818f1np1 00:17:44.249 inet 192.168.100.9/24 scope global mlx_0_1 00:17:44.249 valid_lft forever preferred_lft forever 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- 
# '[' '' == iso ']' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:44.249 21:47:16 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:17:44.249 192.168.100.9' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:17:44.249 192.168.100.9' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # head -n 1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:17:44.249 192.168.100.9' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # tail -n +2 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # head -n 1 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:17:44.249 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=3029553 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 3029553 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3029553 ']' 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.250 21:47:16 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.250 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.250 [2024-11-29 21:47:16.402662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:44.250 [2024-11-29 21:47:16.402731] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.250 [2024-11-29 21:47:16.471707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:44.626 [2024-11-29 21:47:16.511413] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.626 [2024-11-29 21:47:16.511452] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.626 [2024-11-29 21:47:16.511463] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.626 [2024-11-29 21:47:16.511471] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.626 [2024-11-29 21:47:16.511479] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.626 [2024-11-29 21:47:16.511584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.626 [2024-11-29 21:47:16.511688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.626 [2024-11-29 21:47:16.511690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.626 [2024-11-29 21:47:16.699720] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16f5710/0x16f9bc0) succeed. 00:17:44.626 [2024-11-29 21:47:16.710653] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16f6c60/0x173b260) succeed. 
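The address discovery traced above reduces to one pipeline per RDMA netdev: list its IPv4 records with ip -o, take the ADDR/PREFIX field, and strip the prefix length; the first two results then become the first and second target IPs via the head/tail selection shown earlier. A minimal standalone sketch of that logic, assuming the mlx_0_0/mlx_0_1 device names seen in this run:

    #!/usr/bin/env bash
    # Return the first IPv4 address configured on an interface.
    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 is "ADDR/PREFIX".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ips=""
    for nic in mlx_0_0 mlx_0_1; do           # device names assumed from this run
        addr=$(get_ip_address "$nic")
        [ -z "$addr" ] && continue           # skip interfaces without an IPv4 address
        rdma_ips+="$addr"$'\n'
    done

    # Same head/tail selection used for NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP.
    first=$(echo "$rdma_ips" | head -n 1)
    second=$(echo "$rdma_ips" | tail -n +2 | head -n 1)
    echo "first=$first second=$second"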
00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.626 [2024-11-29 21:47:16.817959] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.626 NULL1 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3029576 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.626 21:47:16 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.626 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.886 21:47:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.145 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.145 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:45.145 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.145 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.145 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.405 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.405 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:45.405 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.405 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.405 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.973 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.973 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:45.973 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.973 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.973 21:47:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.232 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.232 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:46.232 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.232 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.232 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.491 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.491 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 
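The kill -0 3029576 / rpc_cmd pairs that alternate from here on are a liveness-polling loop: while the connect_stress process is still alive, another batch of management RPCs from rpc.txt is replayed against the target, and the loop ends only once the PID disappears. A generic sketch of the pattern, with a sleep standing in for both the perf binary and the RPC batch (assumptions; the full rpc.txt contents are not shown in this log):

    #!/usr/bin/env bash
    some_workload() { sleep 10; }            # stand-in for the connect_stress binary

    some_workload &
    PERF_PID=$!

    # kill -0 delivers no signal; it only tests whether the PID still exists,
    # which is what each "# kill -0 3029576" entry in the trace is doing.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        echo "replaying one RPC batch..."    # stand-in for: rpc_cmd < rpc.txt
        sleep 1
    done

    # Once kill -0 fails ("No such process"), reap the worker's exit status.
    wait "$PERF_PID" 2>/dev/null
    echo "workload exited with status $?"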
00:17:46.491 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.491 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.491 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.751 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.751 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:46.751 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.751 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.751 21:47:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.010 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.010 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:47.010 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.010 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.010 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.578 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.578 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:47.578 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.578 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.578 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.837 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.837 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:47.837 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.837 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.837 21:47:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.096 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.096 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:48.096 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.096 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.096 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.355 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.355 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3029576 00:17:48.355 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.355 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.355 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.614 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.614 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:48.614 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.614 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.614 21:47:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.181 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.181 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:49.181 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.181 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.181 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.440 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.440 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:49.440 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.440 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.440 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.699 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.699 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:49.699 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.699 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.699 21:47:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.958 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.958 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:49.958 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.958 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.958 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.525 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.525 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3029576 00:17:50.525 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.525 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.525 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.784 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.784 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:50.784 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.784 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.784 21:47:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.043 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.043 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:51.043 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.043 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.043 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.302 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.302 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:51.302 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.302 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.302 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.561 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.561 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:51.561 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.561 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.561 21:47:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.129 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.129 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:52.129 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.129 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.129 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.388 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.388 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 3029576 00:17:52.388 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.388 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.388 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.647 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.647 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:52.647 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.647 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.647 21:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.906 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.906 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:52.906 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.906 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.906 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.473 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.473 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:53.473 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.473 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.473 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.731 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.731 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:53.731 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.732 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.732 21:47:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.991 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.991 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:53.991 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.991 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.991 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.250 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.250 21:47:26 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:54.250 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.250 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.250 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.509 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.509 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:54.509 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.509 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.509 21:47:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.078 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.078 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.078 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3029576 00:17:55.079 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3029576) - No such process 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3029576 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:55.079 rmmod nvme_rdma 00:17:55.079 rmmod nvme_fabrics 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 3029553 ']' 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 3029553 00:17:55.079 21:47:27 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3029553 ']' 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3029553 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3029553 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3029553' 00:17:55.079 killing process with pid 3029553 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3029553 00:17:55.079 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3029553 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:17:55.338 00:17:55.338 real 0m17.973s 00:17:55.338 user 0m40.019s 00:17:55.338 sys 0m7.716s 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.338 ************************************ 00:17:55.338 END TEST nvmf_connect_stress 00:17:55.338 ************************************ 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.338 ************************************ 00:17:55.338 START TEST nvmf_fused_ordering 00:17:55.338 ************************************ 00:17:55.338 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:55.598 * Looking for test storage... 
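The killprocess sequence above is deliberately defensive: it checks that a PID was supplied, confirms the process still exists, resolves its command name with ps so a recycled PID is never signalled by mistake (here it resolves to reactor_1), special-cases a sudo wrapper, and only then kills and reaps the process. A condensed sketch of that guard, assuming a Linux procps ps that accepts --no-headers -o comm= (the sudo branch is simplified to a refusal):

    #!/usr/bin/env bash
    # Guarded kill, modeled on the killprocess sequence in the trace above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # no PID supplied
        kill -0 "$pid" 2>/dev/null || return 0     # process already gone

        # Resolve the command name; a recycled PID would show a different one.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1     # simplified: refuse sudo wrappers

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it when it is our own child
    }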
00:17:55.598 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:55.598 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:55.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.599 --rc genhtml_branch_coverage=1 00:17:55.599 --rc genhtml_function_coverage=1 00:17:55.599 --rc genhtml_legend=1 00:17:55.599 --rc geninfo_all_blocks=1 00:17:55.599 --rc geninfo_unexecuted_blocks=1 00:17:55.599 00:17:55.599 ' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:55.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.599 --rc genhtml_branch_coverage=1 00:17:55.599 --rc genhtml_function_coverage=1 00:17:55.599 --rc genhtml_legend=1 00:17:55.599 --rc geninfo_all_blocks=1 00:17:55.599 --rc geninfo_unexecuted_blocks=1 00:17:55.599 00:17:55.599 ' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:55.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.599 --rc genhtml_branch_coverage=1 00:17:55.599 --rc genhtml_function_coverage=1 00:17:55.599 --rc genhtml_legend=1 00:17:55.599 --rc geninfo_all_blocks=1 00:17:55.599 --rc geninfo_unexecuted_blocks=1 00:17:55.599 00:17:55.599 ' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:55.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.599 --rc genhtml_branch_coverage=1 00:17:55.599 --rc genhtml_function_coverage=1 00:17:55.599 --rc genhtml_legend=1 00:17:55.599 --rc geninfo_all_blocks=1 00:17:55.599 --rc geninfo_unexecuted_blocks=1 00:17:55.599 00:17:55.599 ' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:55.599 21:47:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.173 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:02.174 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:02.174 
21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:02.174 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:02.174 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:02.174 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- 
# is_hw=yes 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # rdma_device_init 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@526 -- # allocate_nic_ips 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:02.174 21:47:34 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:02.174 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:02.174 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:02.174 altname enp217s0f0np0 00:18:02.174 altname ens818f0np0 00:18:02.174 inet 192.168.100.8/24 scope global mlx_0_0 00:18:02.174 valid_lft forever preferred_lft forever 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:02.174 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:02.174 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:02.174 altname enp217s0f1np1 00:18:02.174 altname ens818f1np1 00:18:02.174 inet 192.168.100.9/24 scope global mlx_0_1 00:18:02.174 valid_lft forever preferred_lft forever 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- 
# '[' '' == iso ']' 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:18:02.174 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:02.175 21:47:34 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:18:02.175 192.168.100.9' 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:18:02.175 192.168.100.9' 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # head -n 1 00:18:02.175 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:18:02.434 192.168.100.9' 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # tail -n +2 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # head -n 1 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=3034629 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 3034629 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3034629 ']' 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:02.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.434 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.434 [2024-11-29 21:47:34.510566] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:02.434 [2024-11-29 21:47:34.510623] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.434 [2024-11-29 21:47:34.580277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.434 [2024-11-29 21:47:34.617965] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.434 [2024-11-29 21:47:34.618007] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.434 [2024-11-29 21:47:34.618016] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.434 [2024-11-29 21:47:34.618025] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.434 [2024-11-29 21:47:34.618032] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.434 [2024-11-29 21:47:34.618054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.694 [2024-11-29 21:47:34.771890] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdcd290/0xdd1740) succeed. 00:18:02.694 [2024-11-29 21:47:34.780828] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdce740/0xe12de0) succeed. 
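For reference, the get_rdma_if_list/get_ip_address trace above resolves each RDMA interface's IPv4 address with a three-stage pipeline. A minimal standalone sketch of that helper, assuming the named interface (e.g. mlx_0_0) exists on the host:

    # Sketch of the get_ip_address helper exercised in nvmf/common.sh above;
    # "ip -o -4" prints one record per IPv4 address, and field 4 is ADDR/PREFIX.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 in this run
    get_ip_address mlx_0_1   # prints 192.168.100.9 in this run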
00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.694 [2024-11-29 21:47:34.836201] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.694 NULL1 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.694 21:47:34 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:02.694 [2024-11-29 21:47:34.892355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
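The target/fused_ordering.sh calls above provision the target over JSON-RPC before launching the I/O app. A minimal sketch of the same sequence issued by hand, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The null bdev backs the 1 GB namespace the fused_ordering app reports attaching to below, without touching real media.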
00:18:02.694 [2024-11-29 21:47:34.892410] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034837 ] 00:18:02.956 Attached to nqn.2016-06.io.spdk:cnode1 00:18:02.956 Namespace ID: 1 size: 1GB 00:18:02.956 fused_ordering(0) 00:18:02.956 fused_ordering(1) 00:18:02.956 fused_ordering(2) 00:18:02.956 fused_ordering(3) 00:18:02.956 fused_ordering(4) 00:18:02.956 fused_ordering(5) 00:18:02.956 fused_ordering(6) 00:18:02.956 fused_ordering(7) 00:18:02.956 fused_ordering(8) 00:18:02.956 fused_ordering(9) 00:18:02.956 fused_ordering(10) 00:18:02.956 fused_ordering(11) 00:18:02.956 fused_ordering(12) 00:18:02.956 fused_ordering(13) 00:18:02.956 fused_ordering(14) 00:18:02.956 fused_ordering(15) 00:18:02.956 fused_ordering(16) 00:18:02.956 fused_ordering(17) 00:18:02.956 fused_ordering(18) 00:18:02.956 fused_ordering(19) 00:18:02.956 fused_ordering(20) 00:18:02.956 fused_ordering(21) 00:18:02.956 fused_ordering(22) 00:18:02.956 fused_ordering(23) 00:18:02.956 fused_ordering(24) 00:18:02.956 fused_ordering(25) 00:18:02.956 fused_ordering(26) 00:18:02.956 fused_ordering(27) 00:18:02.956 fused_ordering(28) 00:18:02.956 fused_ordering(29) 00:18:02.956 fused_ordering(30) 00:18:02.956 fused_ordering(31) 00:18:02.956 fused_ordering(32) 00:18:02.956 fused_ordering(33) 00:18:02.956 fused_ordering(34) 00:18:02.956 fused_ordering(35) 00:18:02.956 fused_ordering(36) 00:18:02.956 fused_ordering(37) 00:18:02.956 fused_ordering(38) 00:18:02.956 fused_ordering(39) 00:18:02.956 fused_ordering(40) 00:18:02.956 fused_ordering(41) 00:18:02.956 fused_ordering(42) 00:18:02.956 fused_ordering(43) 00:18:02.956 fused_ordering(44) 00:18:02.956 fused_ordering(45) 00:18:02.956 fused_ordering(46) 00:18:02.956 fused_ordering(47) 00:18:02.956 fused_ordering(48) 00:18:02.956 fused_ordering(49) 00:18:02.956 fused_ordering(50) 00:18:02.956 fused_ordering(51) 00:18:02.956 fused_ordering(52) 00:18:02.956 fused_ordering(53) 00:18:02.956 fused_ordering(54) 00:18:02.956 fused_ordering(55) 00:18:02.956 fused_ordering(56) 00:18:02.956 fused_ordering(57) 00:18:02.956 fused_ordering(58) 00:18:02.956 fused_ordering(59) 00:18:02.956 fused_ordering(60) 00:18:02.956 fused_ordering(61) 00:18:02.956 fused_ordering(62) 00:18:02.956 fused_ordering(63) 00:18:02.956 fused_ordering(64) 00:18:02.956 fused_ordering(65) 00:18:02.956 fused_ordering(66) 00:18:02.956 fused_ordering(67) 00:18:02.956 fused_ordering(68) 00:18:02.956 fused_ordering(69) 00:18:02.956 fused_ordering(70) 00:18:02.956 fused_ordering(71) 00:18:02.956 fused_ordering(72) 00:18:02.956 fused_ordering(73) 00:18:02.956 fused_ordering(74) 00:18:02.956 fused_ordering(75) 00:18:02.956 fused_ordering(76) 00:18:02.956 fused_ordering(77) 00:18:02.956 fused_ordering(78) 00:18:02.956 fused_ordering(79) 00:18:02.956 fused_ordering(80) 00:18:02.956 fused_ordering(81) 00:18:02.956 fused_ordering(82) 00:18:02.956 fused_ordering(83) 00:18:02.956 fused_ordering(84) 00:18:02.956 fused_ordering(85) 00:18:02.956 fused_ordering(86) 00:18:02.956 fused_ordering(87) 00:18:02.956 fused_ordering(88) 00:18:02.956 fused_ordering(89) 00:18:02.956 fused_ordering(90) 00:18:02.956 fused_ordering(91) 00:18:02.956 fused_ordering(92) 00:18:02.956 fused_ordering(93) 00:18:02.956 fused_ordering(94) 00:18:02.956 fused_ordering(95) 00:18:02.956 fused_ordering(96) 00:18:02.956 fused_ordering(97) 00:18:02.956 fused_ordering(98) 
00:18:02.956 fused_ordering(99) ... fused_ordering(958) [entries 99 through 958 continue in the same one-per-fused-command pattern; timestamps advance from 00:18:02.956 to 00:18:03.477]
00:18:03.477 fused_ordering(959) 00:18:03.477 fused_ordering(960) 00:18:03.477 fused_ordering(961) 00:18:03.477 fused_ordering(962) 00:18:03.477 fused_ordering(963) 00:18:03.477 fused_ordering(964) 00:18:03.477 fused_ordering(965) 00:18:03.477 fused_ordering(966) 00:18:03.477 fused_ordering(967) 00:18:03.477 fused_ordering(968) 00:18:03.477 fused_ordering(969) 00:18:03.477 fused_ordering(970) 00:18:03.477 fused_ordering(971) 00:18:03.477 fused_ordering(972) 00:18:03.477 fused_ordering(973) 00:18:03.477 fused_ordering(974) 00:18:03.477 fused_ordering(975) 00:18:03.477 fused_ordering(976) 00:18:03.477 fused_ordering(977) 00:18:03.477 fused_ordering(978) 00:18:03.477 fused_ordering(979) 00:18:03.477 fused_ordering(980) 00:18:03.477 fused_ordering(981) 00:18:03.477 fused_ordering(982) 00:18:03.477 fused_ordering(983) 00:18:03.477 fused_ordering(984) 00:18:03.477 fused_ordering(985) 00:18:03.477 fused_ordering(986) 00:18:03.477 fused_ordering(987) 00:18:03.477 fused_ordering(988) 00:18:03.477 fused_ordering(989) 00:18:03.477 fused_ordering(990) 00:18:03.477 fused_ordering(991) 00:18:03.477 fused_ordering(992) 00:18:03.477 fused_ordering(993) 00:18:03.477 fused_ordering(994) 00:18:03.477 fused_ordering(995) 00:18:03.477 fused_ordering(996) 00:18:03.477 fused_ordering(997) 00:18:03.477 fused_ordering(998) 00:18:03.477 fused_ordering(999) 00:18:03.477 fused_ordering(1000) 00:18:03.477 fused_ordering(1001) 00:18:03.477 fused_ordering(1002) 00:18:03.477 fused_ordering(1003) 00:18:03.477 fused_ordering(1004) 00:18:03.478 fused_ordering(1005) 00:18:03.478 fused_ordering(1006) 00:18:03.478 fused_ordering(1007) 00:18:03.478 fused_ordering(1008) 00:18:03.478 fused_ordering(1009) 00:18:03.478 fused_ordering(1010) 00:18:03.478 fused_ordering(1011) 00:18:03.478 fused_ordering(1012) 00:18:03.478 fused_ordering(1013) 00:18:03.478 fused_ordering(1014) 00:18:03.478 fused_ordering(1015) 00:18:03.478 fused_ordering(1016) 00:18:03.478 fused_ordering(1017) 00:18:03.478 fused_ordering(1018) 00:18:03.478 fused_ordering(1019) 00:18:03.478 fused_ordering(1020) 00:18:03.478 fused_ordering(1021) 00:18:03.478 fused_ordering(1022) 00:18:03.478 fused_ordering(1023) 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:03.478 rmmod nvme_rdma 00:18:03.478 rmmod nvme_fabrics 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:03.478 21:47:35 
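The teardown traced above is the generic nvmftestfini path: drop the EXIT trap, sync, and retry-unload the fabrics kernel modules. A condensed sketch of that loop, reconstructed from the xtrace rather than copied from nvmf/common.sh (the transport variable name and the retry pause are assumptions):

    # Reconstructed from the xtrace above; not verbatim nvmf/common.sh.
    nvmfcleanup() {
        sync
        if [[ ${TEST_TRANSPORT:-rdma} == rdma ]]; then   # assumed variable name
            set +e                                       # unload can fail while references drain
            for i in {1..20}; do
                # modprobe -r also drops now-unused dependencies; the
                # "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines above are its -v output
                modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
                sleep 1                                  # assumption: pause between retries
            done
            set -e
        fi
        return 0
    }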
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 3034629 ']' 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 3034629 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3034629 ']' 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3034629 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3034629 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3034629' 00:18:03.478 killing process with pid 3034629 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3034629 00:18:03.478 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3034629 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:18:03.738 00:18:03.738 real 0m8.372s 00:18:03.738 user 0m4.021s 00:18:03.738 sys 0m5.551s 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:03.738 ************************************ 00:18:03.738 END TEST nvmf_fused_ordering 00:18:03.738 ************************************ 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.738 ************************************ 00:18:03.738 START TEST nvmf_ns_masking 00:18:03.738 ************************************ 00:18:03.738 21:47:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:18:03.999 * Looking for test storage... 
00:18:03.999 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:03.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.999 --rc genhtml_branch_coverage=1 00:18:03.999 --rc genhtml_function_coverage=1 00:18:03.999 --rc genhtml_legend=1 00:18:03.999 --rc geninfo_all_blocks=1 00:18:03.999 --rc geninfo_unexecuted_blocks=1 00:18:03.999 00:18:03.999 ' 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:03.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.999 --rc genhtml_branch_coverage=1 00:18:03.999 --rc genhtml_function_coverage=1 00:18:03.999 --rc genhtml_legend=1 00:18:03.999 --rc geninfo_all_blocks=1 00:18:03.999 --rc geninfo_unexecuted_blocks=1 00:18:03.999 00:18:03.999 ' 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:03.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.999 --rc genhtml_branch_coverage=1 00:18:03.999 --rc genhtml_function_coverage=1 00:18:03.999 --rc genhtml_legend=1 00:18:03.999 --rc geninfo_all_blocks=1 00:18:03.999 --rc geninfo_unexecuted_blocks=1 00:18:03.999 00:18:03.999 ' 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:03.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.999 --rc genhtml_branch_coverage=1 00:18:03.999 --rc genhtml_function_coverage=1 00:18:03.999 --rc genhtml_legend=1 00:18:03.999 --rc geninfo_all_blocks=1 00:18:03.999 --rc geninfo_unexecuted_blocks=1 00:18:03.999 00:18:03.999 ' 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.999 21:47:36 
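Before sourcing nvmf/common.sh, the harness gates the lcov options on a pure-shell version comparison (lt 1.15 2 expands to cmp_versions "1.15" "<" "2", as traced above). A minimal reconstruction of that comparator from the traced steps; the canonical helper lives in scripts/common.sh and may differ in detail:

    # Split versions on '.', '-' and ':', compare component-wise,
    # treating missing or non-numeric components as 0.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a b
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            (( a > b )) && { [[ $2 == *'>'* ]]; return; }
            (( a < b )) && { [[ $2 == *'<'* ]]; return; }
        done
        [[ $2 == *'='* ]]    # all components equal: true only for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }

Here lt 1.15 2 decides on the first component (1 < 2) and returns success, which is why the branch-coverage LCOV_OPTS block above gets populated.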
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.999 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:04.000 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:04.000 21:47:36 
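One genuine wart is captured in the trace above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and the test builtin rejects the empty string with "[: : integer expression expected", because an unset flag reached a numeric comparison. The usual guard is to default the value first; the snippet below is illustrative only (the variable name is a placeholder, not the upstream fix):

    # '' is not an integer, so [ '' -eq 1 ] errors out; default it to 0 first.
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then    # SPDK_SOME_FLAG: placeholder name
        echo "flag enabled"
    fi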
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=53d33fd5-3ca0-4e35-bb16-051daf50f149 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1c968ca9-ef04-4f25-a026-ad9e75a17fee 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=206aa72d-bd0f-4277-bfc4-5bd917c5a78d 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:04.000 21:47:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.571 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:10.572 Found 
0000:d9:00.0 (0x15b3 - 0x1015) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:10.572 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:10.572 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:10.572 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # rdma_device_init 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@526 -- # allocate_nic_ips 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@108 -- # echo mlx_0_0 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:10.572 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.572 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:10.572 altname enp217s0f0np0 00:18:10.572 altname ens818f0np0 00:18:10.572 inet 192.168.100.8/24 scope global mlx_0_0 00:18:10.572 valid_lft forever preferred_lft forever 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:10.572 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:10.572 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.572 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:10.572 altname enp217s0f1np1 00:18:10.572 
altname ens818f1np1 00:18:10.572 inet 192.168.100.9/24 scope global mlx_0_1 00:18:10.573 valid_lft forever preferred_lft forever 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 
-- # cut -d/ -f1 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:18:10.573 192.168.100.9' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:18:10.573 192.168.100.9' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # head -n 1 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:18:10.573 192.168.100.9' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # tail -n +2 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # head -n 1 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=3038075 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 3038075 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3038075 ']' 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:10.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:10.573 [2024-11-29 21:47:42.326026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:10.573 [2024-11-29 21:47:42.326080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.573 [2024-11-29 21:47:42.396741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.573 [2024-11-29 21:47:42.435144] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.573 [2024-11-29 21:47:42.435185] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.573 [2024-11-29 21:47:42.435195] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.573 [2024-11-29 21:47:42.435203] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.573 [2024-11-29 21:47:42.435210] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.573 [2024-11-29 21:47:42.435232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.573 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:10.573 [2024-11-29 21:47:42.759033] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x156bf70/0x1570420) succeed. 00:18:10.573 [2024-11-29 21:47:42.768117] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x156d420/0x15b1ac0) succeed. 
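With both IB devices registered, the target bring-up that the next stretch of trace performs reduces to a short RPC sequence plus one host-side connect. This replay is condensed from the traced commands themselves (only the rpc.py path is shortened into a variable):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Host side: -q sets the host NQN and -I the host UUID, the identity
    # that the namespace-masking checks later key on.
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 206aa72d-bd0f-4277-bfc4-5bd917c5a78d -a 192.168.100.8 -s 4420 -i 4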
00:18:10.832 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:10.833 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:10.833 21:47:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:10.833 Malloc1 00:18:10.833 21:47:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:11.090 Malloc2 00:18:11.090 21:47:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:11.348 21:47:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:11.348 21:47:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:11.607 [2024-11-29 21:47:43.748602] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:11.607 21:47:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:11.607 21:47:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 206aa72d-bd0f-4277-bfc4-5bd917c5a78d -a 192.168.100.8 -s 4420 -i 4 00:18:11.865 21:47:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.865 21:47:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:11.865 21:47:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.865 21:47:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:11.865 21:47:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.401 [ 0]:0x1 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f327c41b065d4e49ade645eee01f40e7 00:18:14.401 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f327c41b065d4e49ade645eee01f40e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.402 [ 0]:0x1 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f327c41b065d4e49ade645eee01f40e7 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f327c41b065d4e49ade645eee01f40e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:14.402 [ 1]:0x2 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48aa553290294dc2af66f05cbbca93d4 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48aa553290294dc2af66f05cbbca93d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:14.402 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:18:14.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.662 21:47:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:14.921 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:15.179 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:15.179 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 206aa72d-bd0f-4277-bfc4-5bd917c5a78d -a 192.168.100.8 -s 4420 -i 4 00:18:15.438 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:15.438 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:15.438 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.438 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:15.438 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:15.438 21:47:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.410 [ 0]:0x2 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.410 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.670 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48aa553290294dc2af66f05cbbca93d4 00:18:17.670 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48aa553290294dc2af66f05cbbca93d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.670 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.670 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:17.670 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.670 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.670 [ 0]:0x1 00:18:17.670 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.670 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.930 21:47:49 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f327c41b065d4e49ade645eee01f40e7 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f327c41b065d4e49ade645eee01f40e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.930 [ 1]:0x2 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48aa553290294dc2af66f05cbbca93d4 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48aa553290294dc2af66f05cbbca93d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.930 21:47:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.930 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:18.189 [ 0]:0x2 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48aa553290294dc2af66f05cbbca93d4 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48aa553290294dc2af66f05cbbca93d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:18.189 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.448 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:18.707 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:18.707 21:47:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 206aa72d-bd0f-4277-bfc4-5bd917c5a78d -a 192.168.100.8 -s 4420 -i 4 00:18:18.967 21:47:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:18.967 21:47:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:18.967 21:47:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.967 21:47:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:18.967 21:47:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:18.967 21:47:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:20.872 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:20.872 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:20.872 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.132 21:47:53 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.132 [ 0]:0x1 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f327c41b065d4e49ade645eee01f40e7 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f327c41b065d4e49ade645eee01f40e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.132 [ 1]:0x2 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48aa553290294dc2af66f05cbbca93d4 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48aa553290294dc2af66f05cbbca93d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.132 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:21.391 21:47:53 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.391 [ 0]:0x2 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48aa553290294dc2af66f05cbbca93d4 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48aa553290294dc2af66f05cbbca93d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:21.391 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:21.392 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.392 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:21.392 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.392 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:21.392 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.392 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:21.392 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:21.392 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:21.650 [2024-11-29 21:47:53.767982] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:21.650 request: 00:18:21.650 { 00:18:21.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.650 "nsid": 2, 00:18:21.650 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.650 "method": "nvmf_ns_remove_host", 00:18:21.650 "req_id": 1 00:18:21.650 } 00:18:21.650 Got JSON-RPC error response 00:18:21.650 response: 00:18:21.650 { 00:18:21.650 "code": -32602, 00:18:21.650 "message": "Invalid parameters" 00:18:21.650 } 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.650 21:47:53 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.650 [ 0]:0x2 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.650 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.908 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48aa553290294dc2af66f05cbbca93d4 00:18:21.908 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48aa553290294dc2af66f05cbbca93d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.908 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:21.908 21:47:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3040358 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3040358 /var/tmp/host.sock 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3040358 ']' 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:22.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
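Condensed for reference, the masking flow traced above comes down to a handful of rpc.py and nvme-cli calls. This is a minimal sketch, assuming the target from this run is still listening on 192.168.100.8:4420; rpc.py stands in for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path, and the host-ID and queue-count flags that connect() in ns_masking.sh also passes are omitted:

  # export a namespace that stays hidden until a host is explicitly allowed in
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # grant, then revoke, visibility of namespace 1 for one host NQN
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # connect as that host and check what the target exposes
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 192.168.100.8 -s 4420
  nvme list-ns /dev/nvme0
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

ns_is_visible() in the trace counts a namespace as masked when that last query returns an all-zero NGUID, which is exactly what the hidden-namespace branches above show.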
00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.166 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:22.166 [2024-11-29 21:47:54.246017] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:22.166 [2024-11-29 21:47:54.246068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040358 ] 00:18:22.166 [2024-11-29 21:47:54.313296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.166 [2024-11-29 21:47:54.351311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.424 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:22.424 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:22.424 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.683 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:22.683 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 53d33fd5-3ca0-4e35-bb16-051daf50f149 00:18:22.683 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:22.683 21:47:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 53D33FD53CA04E35BB16051DAF50F149 -i 00:18:22.941 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1c968ca9-ef04-4f25-a026-ad9e75a17fee 00:18:22.941 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:22.941 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1C968CA9EF044F25A026AD9E75A17FEE -i 00:18:23.198 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:23.456 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:23.456 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:23.456 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:18:23.716 nvme0n1 00:18:23.716 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:23.716 21:47:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:23.974 nvme1n2 00:18:23.974 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:23.974 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:23.974 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:23.974 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:23.974 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:24.232 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:24.232 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:24.232 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:24.232 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:24.491 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 53d33fd5-3ca0-4e35-bb16-051daf50f149 == \5\3\d\3\3\f\d\5\-\3\c\a\0\-\4\e\3\5\-\b\b\1\6\-\0\5\1\d\a\f\5\0\f\1\4\9 ]] 00:18:24.491 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:24.491 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:24.491 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1c968ca9-ef04-4f25-a026-ad9e75a17fee == \1\c\9\6\8\c\a\9\-\e\f\0\4\-\4\f\2\5\-\a\0\2\6\-\a\d\9\e\7\5\a\1\7\f\e\e ]] 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3040358 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3040358 ']' 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3040358 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3040358 00:18:24.751 21:47:56 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3040358' 00:18:24.751 killing process with pid 3040358 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3040358 00:18:24.751 21:47:56 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3040358 00:18:25.011 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:25.270 rmmod nvme_rdma 00:18:25.270 rmmod nvme_fabrics 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 3038075 ']' 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 3038075 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3038075 ']' 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3038075 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3038075 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3038075' 00:18:25.270 killing process with pid 3038075 00:18:25.270 
21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3038075 00:18:25.270 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3038075 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:18:25.530 00:18:25.530 real 0m21.761s 00:18:25.530 user 0m24.533s 00:18:25.530 sys 0m6.859s 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:25.530 ************************************ 00:18:25.530 END TEST nvmf_ns_masking 00:18:25.530 ************************************ 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:25.530 21:47:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:25.790 ************************************ 00:18:25.790 START TEST nvmf_nvme_cli 00:18:25.790 ************************************ 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:18:25.790 * Looking for test storage... 
00:18:25.790 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.790 21:47:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.790 --rc genhtml_branch_coverage=1 00:18:25.790 --rc genhtml_function_coverage=1 00:18:25.790 --rc genhtml_legend=1 00:18:25.790 --rc geninfo_all_blocks=1 00:18:25.790 --rc geninfo_unexecuted_blocks=1 00:18:25.790 00:18:25.790 ' 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.790 --rc genhtml_branch_coverage=1 00:18:25.790 --rc genhtml_function_coverage=1 00:18:25.790 --rc genhtml_legend=1 00:18:25.790 --rc geninfo_all_blocks=1 00:18:25.790 --rc geninfo_unexecuted_blocks=1 00:18:25.790 00:18:25.790 ' 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.790 --rc genhtml_branch_coverage=1 00:18:25.790 --rc genhtml_function_coverage=1 00:18:25.790 --rc genhtml_legend=1 00:18:25.790 --rc geninfo_all_blocks=1 00:18:25.790 --rc geninfo_unexecuted_blocks=1 00:18:25.790 00:18:25.790 ' 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:25.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.790 --rc genhtml_branch_coverage=1 00:18:25.790 --rc genhtml_function_coverage=1 00:18:25.790 --rc genhtml_legend=1 00:18:25.790 --rc geninfo_all_blocks=1 00:18:25.790 --rc geninfo_unexecuted_blocks=1 00:18:25.790 00:18:25.790 ' 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.790 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.051 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.051 21:47:58 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:26.051 21:47:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:32.653 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:32.654 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:32.654 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:32.654 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:32.654 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # rdma_device_init 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:32.654 21:48:04 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # allocate_nic_ips 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 
-- # awk '{print $4}'
00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:18:32.654 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:32.654 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:18:32.654 altname enp217s0f0np0
00:18:32.654 altname ens818f0np0
00:18:32.654 inet 192.168.100.8/24 scope global mlx_0_0
00:18:32.654 valid_lft forever preferred_lft forever
00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:18:32.654 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:18:32.655 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:32.655 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:18:32.655 altname enp217s0f1np1
00:18:32.655 altname ens818f1np1
00:18:32.655 inet 192.168.100.9/24 scope global mlx_0_1
00:18:32.655 valid_lft forever preferred_lft forever
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]]
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # get_available_rdma_ips
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:18:32.655 21:48:04
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:18:32.655 192.168.100.9' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:18:32.655 192.168.100.9' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # head -n 1 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:18:32.655 192.168.100.9' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # tail -n +2 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # head -n 1 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=3044742 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 3044742 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3044742 ']' 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.655 21:48:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.914 [2024-11-29 21:48:04.943854] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:32.914 [2024-11-29 21:48:04.943912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.914 [2024-11-29 21:48:05.014954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.914 [2024-11-29 21:48:05.056183] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.914 [2024-11-29 21:48:05.056228] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.914 [2024-11-29 21:48:05.056237] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.914 [2024-11-29 21:48:05.056245] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:32.914 [2024-11-29 21:48:05.056252] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.914 [2024-11-29 21:48:05.056297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.914 [2024-11-29 21:48:05.056393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.914 [2024-11-29 21:48:05.056482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.914 [2024-11-29 21:48:05.056483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.914 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:32.914 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:32.914 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:32.914 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:32.914 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.172 [2024-11-29 21:48:05.236237] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1722f50/0x1727400) succeed. 00:18:33.172 [2024-11-29 21:48:05.246879] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1724540/0x1768aa0) succeed. 
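For anyone reading this log to reproduce the target setup by hand: the rpc_cmd helper used throughout the trace is the harness's wrapper that forwards to SPDK's scripts/rpc.py client over the /var/tmp/spdk.sock socket seen above (that wrapper detail is an assumption about the harness, not verbatim script code). The RPC sequence the test drives, above and in the records that follow, corresponds roughly to this sketch; the $rpc variable is illustrative, while the transport options, bdev sizes, NQN, serial, and listener address are copied from this run:

  rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  # RDMA transport with 1024 shared buffers and an 8192-byte IO unit
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # two 64 MB RAM-backed bdevs with 512-byte blocks
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  # subsystem with both namespaces, plus data and discovery listeners on the first RDMA IP
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

On the host side the test then drives plain nvme-cli against that listener, exactly as traced further down: nvme discover, then nvme connect -i 15 -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420, and finally nvme disconnect -n nqn.2016-06.io.spdk:cnode1 once both namespaces (/dev/nvme0n1 and /dev/nvme0n2) have appeared.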
00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.172 Malloc0 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.172 Malloc1 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.172 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.431 [2024-11-29 21:48:05.447769] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:33.431 21:48:05 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420
00:18:33.431
00:18:33.431 Discovery Log Number of Records 2, Generation counter 2
00:18:33.431 =====Discovery Log Entry 0======
00:18:33.431 trtype: rdma
00:18:33.431 adrfam: ipv4
00:18:33.431 subtype: current discovery subsystem
00:18:33.431 treq: not required
00:18:33.431 portid: 0
00:18:33.431 trsvcid: 4420
00:18:33.431 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:18:33.431 traddr: 192.168.100.8
00:18:33.431 eflags: explicit discovery connections, duplicate discovery information
00:18:33.431 rdma_prtype: not specified
00:18:33.431 rdma_qptype: connected
00:18:33.431 rdma_cms: rdma-cm
00:18:33.431 rdma_pkey: 0x0000
00:18:33.431 =====Discovery Log Entry 1======
00:18:33.431 trtype: rdma
00:18:33.431 adrfam: ipv4
00:18:33.431 subtype: nvme subsystem
00:18:33.431 treq: not required
00:18:33.431 portid: 0
00:18:33.431 trsvcid: 4420
00:18:33.431 subnqn: nqn.2016-06.io.spdk:cnode1
00:18:33.431 traddr: 192.168.100.8
00:18:33.431 eflags: none
00:18:33.431 rdma_prtype: not specified
00:18:33.431 rdma_qptype: connected
00:18:33.431 rdma_cms: rdma-cm
00:18:33.431 rdma_pkey: 0x0000
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]]
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]]
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:18:33.431 21:48:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:18:34.364 21:48:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:34.364 21:48:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0
00:18:34.364 21:48:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:18:34.364 21:48:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli --
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:34.364 21:48:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:34.364 21:48:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:36.897 /dev/nvme0n2 ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:36.897 21:48:08 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:37.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.466 
21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:37.466 rmmod nvme_rdma 00:18:37.466 rmmod nvme_fabrics 00:18:37.466 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 3044742 ']' 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 3044742 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3044742 ']' 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3044742 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3044742 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3044742' 00:18:37.725 killing process with pid 3044742 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3044742 00:18:37.725 21:48:09 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3044742 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:18:37.984 00:18:37.984 real 0m12.269s 00:18:37.984 user 0m21.814s 00:18:37.984 sys 0m5.886s 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.984 ************************************ 00:18:37.984 END TEST nvmf_nvme_cli 00:18:37.984 ************************************ 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.984 ************************************ 00:18:37.984 START TEST nvmf_auth_target 00:18:37.984 ************************************ 00:18:37.984 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:38.242 * Looking for test storage... 00:18:38.242 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.242 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:38.242 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:18:38.242 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:38.242 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:38.242 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.242 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.242 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.242 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:38.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.243 --rc genhtml_branch_coverage=1 00:18:38.243 --rc genhtml_function_coverage=1 00:18:38.243 --rc genhtml_legend=1 00:18:38.243 --rc geninfo_all_blocks=1 00:18:38.243 --rc geninfo_unexecuted_blocks=1 00:18:38.243 00:18:38.243 ' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:38.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.243 --rc genhtml_branch_coverage=1 00:18:38.243 --rc genhtml_function_coverage=1 00:18:38.243 --rc genhtml_legend=1 00:18:38.243 --rc geninfo_all_blocks=1 00:18:38.243 --rc geninfo_unexecuted_blocks=1 00:18:38.243 00:18:38.243 ' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:38.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.243 --rc genhtml_branch_coverage=1 00:18:38.243 --rc genhtml_function_coverage=1 00:18:38.243 --rc genhtml_legend=1 00:18:38.243 --rc geninfo_all_blocks=1 00:18:38.243 --rc geninfo_unexecuted_blocks=1 00:18:38.243 00:18:38.243 ' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:38.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.243 --rc genhtml_branch_coverage=1 00:18:38.243 --rc genhtml_function_coverage=1 00:18:38.243 --rc genhtml_legend=1 00:18:38.243 --rc geninfo_all_blocks=1 00:18:38.243 --rc geninfo_unexecuted_blocks=1 00:18:38.243 00:18:38.243 ' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.243 21:48:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:38.243 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:38.243 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:38.244 21:48:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:44.808 21:48:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:18:44.808 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:44.809 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.809 21:48:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:44.809 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:44.809 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:44.809 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@438 -- # is_hw=yes 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # rdma_device_init 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # allocate_nic_ips 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
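Annotation: the modprobe sequence and interface walk traced above come from load_ib_rdma_modules and get_rdma_if_list/get_ip_address in nvmf/common.sh. A condensed sketch of the address-resolution step that the trace performs next (function and interface names are taken from this trace; the rxe_cfg soft-RoCE fallback path is elided):

# Resolve the IPv4 address bound to an RDMA-capable netdev, as nvmf/common.sh@116-117 does.
# "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIX", so strip the prefix.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic_name in mlx_0_0 mlx_0_1; do   # the two mlx5 ports discovered above
    get_ip_address "$nic_name"        # yields 192.168.100.8 and 192.168.100.9 in this run
done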
00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:44.809 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.809 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:44.809 altname enp217s0f0np0 00:18:44.809 altname ens818f0np0 00:18:44.809 inet 192.168.100.8/24 scope global mlx_0_0 00:18:44.809 valid_lft forever preferred_lft forever 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:44.809 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.809 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:44.809 altname enp217s0f1np1 00:18:44.809 altname ens818f1np1 00:18:44.809 inet 192.168.100.9/24 scope global mlx_0_1 00:18:44.809 valid_lft forever preferred_lft forever 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:44.809 21:48:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:44.809 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:18:44.810 192.168.100.9' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:18:44.810 192.168.100.9' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # head -n 1 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:18:44.810 192.168.100.9' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # tail -n +2 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # head -n 1 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3048910 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3048910 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3048910 ']' 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
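Annotation: the RDMA_IP_LIST handling just traced reduces to a two-line head/tail split; a condensed equivalent, with the values from this run:

# nvmf/common.sh@480-482: the first address in the list becomes the target IP,
# the second (if any) the secondary target IP.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9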
00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.810 21:48:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3048986 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:44.810 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=66b019f57fba9e374acf62d11e534d1d88bda826a460325c 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.oSp 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 66b019f57fba9e374acf62d11e534d1d88bda826a460325c 0 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 66b019f57fba9e374acf62d11e534d1d88bda826a460325c 0 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=66b019f57fba9e374acf62d11e534d1d88bda826a460325c 
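Annotation: gen_dhchap_key has now drawn 24 random bytes (48 hex characters) for a "null" key; the `python -` heredoc it invokes next is not expanded by xtrace. A minimal sketch of what that formatting step likely produces, inferred from the DHHC-1 secrets that appear later in this log; the CRC32 suffix and its byte order are assumptions based on the DH-HMAC-CHAP key format, not something the trace shows:

# Sketch only: wrap the hex secret as "DHHC-1:<digest>:<base64(secret || crc32)>:".
# Digest ids per the trace: null=0, sha256=1, sha384=2, sha512=3.
key=66b019f57fba9e374acf62d11e534d1d88bda826a460325c   # from the xxd call above
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # byte order assumed, not visible in the log
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$file"   # nvmf/common.sh@754 in the trace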
00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.oSp 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.oSp 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.oSp 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f196303a25a81b211a95e04fb4c3d9caed5026e2927a226e9196e8a9196073e3 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.qgn 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f196303a25a81b211a95e04fb4c3d9caed5026e2927a226e9196e8a9196073e3 3 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f196303a25a81b211a95e04fb4c3d9caed5026e2927a226e9196e8a9196073e3 3 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f196303a25a81b211a95e04fb4c3d9caed5026e2927a226e9196e8a9196073e3 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.qgn 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.qgn 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.qgn 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=af6fdeef67c69e9201c6db1c1868e5ad 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.pNs 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key af6fdeef67c69e9201c6db1c1868e5ad 1 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 af6fdeef67c69e9201c6db1c1868e5ad 1 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=af6fdeef67c69e9201c6db1c1868e5ad 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.pNs 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.pNs 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.pNs 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=cccdbe3ca8c441dc5791177b3cb89eb1720ab46a4e6bc7cc 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:45.070 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.hHm 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key cccdbe3ca8c441dc5791177b3cb89eb1720ab46a4e6bc7cc 2 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@743 -- # format_key DHHC-1 cccdbe3ca8c441dc5791177b3cb89eb1720ab46a4e6bc7cc 2 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=cccdbe3ca8c441dc5791177b3cb89eb1720ab46a4e6bc7cc 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.hHm 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.hHm 00:18:45.071 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.hHm 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7427d772b74b6b05c24496d118b53edc3c905bf994eecee5 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.qAX 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7427d772b74b6b05c24496d118b53edc3c905bf994eecee5 2 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7427d772b74b6b05c24496d118b53edc3c905bf994eecee5 2 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7427d772b74b6b05c24496d118b53edc3c905bf994eecee5 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.qAX 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.qAX 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.qAX 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:45.330 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=e2785bdf47e17eec5aa21cb58eed73c1 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Ixp 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key e2785bdf47e17eec5aa21cb58eed73c1 1 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 e2785bdf47e17eec5aa21cb58eed73c1 1 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=e2785bdf47e17eec5aa21cb58eed73c1 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Ixp 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Ixp 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Ixp 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a6307d5d12d5c86b9894421eda48b4df57f408874e01e369d5d6f41c54e3ae65 00:18:45.331 21:48:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.OdX 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a6307d5d12d5c86b9894421eda48b4df57f408874e01e369d5d6f41c54e3ae65 3 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a6307d5d12d5c86b9894421eda48b4df57f408874e01e369d5d6f41c54e3ae65 3 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a6307d5d12d5c86b9894421eda48b4df57f408874e01e369d5d6f41c54e3ae65 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.OdX 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.OdX 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.OdX 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3048910 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3048910 ']' 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
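Annotation: all four key files (and three controller-key files) are now in place; the trace that follows is target/auth.sh@108-113 registering each of them with both daemons. Condensed, the loop amounts to the sketch below (per the rpc.py invocations in the trace, rpc_cmd talks to /var/tmp/spdk.sock and hostrpc to /var/tmp/host.sock):

# target/auth.sh@108-113, condensed from the trace that follows:
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"      # target side
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"      # host side
    if [[ -n ${ckeys[$i]} ]]; then                          # ckeys[3] is empty, so key3 gets no ctrlr key
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done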
00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.331 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3048986 /var/tmp/host.sock 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3048986 ']' 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:45.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.590 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oSp 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oSp 00:18:45.849 21:48:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oSp 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.qgn ]] 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qgn 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qgn 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qgn 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pNs 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.108 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.pNs 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.pNs 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.hHm ]] 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hHm 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hHm 00:18:46.368 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hHm 00:18:46.627 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:46.627 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qAX 00:18:46.627 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.627 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.627 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.627 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qAX 00:18:46.627 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qAX 00:18:46.887 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Ixp ]] 00:18:46.887 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ixp 00:18:46.887 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.887 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.887 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.887 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ixp 00:18:46.887 21:48:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ixp 00:18:46.887 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:46.887 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.OdX 00:18:46.887 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.887 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.887 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.887 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.OdX 00:18:46.887 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.OdX 00:18:47.146 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:47.146 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:47.146 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.146 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.146 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.146 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:47.405 21:48:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.405 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.664 00:18:47.664 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.664 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.664 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.924 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.924 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.924 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.924 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.924 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.924 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.924 { 00:18:47.924 "cntlid": 1, 00:18:47.924 "qid": 0, 00:18:47.924 "state": "enabled", 00:18:47.924 "thread": "nvmf_tgt_poll_group_000", 00:18:47.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:18:47.925 "listen_address": { 00:18:47.925 "trtype": "RDMA", 00:18:47.925 "adrfam": "IPv4", 00:18:47.925 "traddr": "192.168.100.8", 00:18:47.925 "trsvcid": "4420" 00:18:47.925 }, 00:18:47.925 "peer_address": { 00:18:47.925 "trtype": "RDMA", 00:18:47.925 "adrfam": "IPv4", 00:18:47.925 "traddr": "192.168.100.8", 00:18:47.925 "trsvcid": "35762" 00:18:47.925 }, 00:18:47.925 "auth": { 00:18:47.925 "state": 
"completed", 00:18:47.925 "digest": "sha256", 00:18:47.925 "dhgroup": "null" 00:18:47.925 } 00:18:47.925 } 00:18:47.925 ]' 00:18:47.925 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.925 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.925 21:48:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.925 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:47.925 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.925 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.925 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.925 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.184 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:18:48.184 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:18:48.751 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.751 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:48.751 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.751 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.009 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.009 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.009 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.009 21:48:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.009 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:49.009 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.009 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.009 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:49.009 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:49.009 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.009 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.010 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.010 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.010 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.010 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.010 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.010 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.268 00:18:49.268 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.268 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.268 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.550 { 00:18:49.550 "cntlid": 3, 00:18:49.550 "qid": 0, 00:18:49.550 "state": "enabled", 00:18:49.550 "thread": "nvmf_tgt_poll_group_000", 00:18:49.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:18:49.550 "listen_address": { 00:18:49.550 "trtype": "RDMA", 00:18:49.550 "adrfam": "IPv4", 00:18:49.550 "traddr": 
"192.168.100.8", 00:18:49.550 "trsvcid": "4420" 00:18:49.550 }, 00:18:49.550 "peer_address": { 00:18:49.550 "trtype": "RDMA", 00:18:49.550 "adrfam": "IPv4", 00:18:49.550 "traddr": "192.168.100.8", 00:18:49.550 "trsvcid": "49179" 00:18:49.550 }, 00:18:49.550 "auth": { 00:18:49.550 "state": "completed", 00:18:49.550 "digest": "sha256", 00:18:49.550 "dhgroup": "null" 00:18:49.550 } 00:18:49.550 } 00:18:49.550 ]' 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.550 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.809 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:18:49.809 21:48:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:18:50.376 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.635 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:50.635 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.635 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.635 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.635 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.635 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.635 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
null 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.894 21:48:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.894 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.153 { 00:18:51.153 "cntlid": 5, 00:18:51.153 "qid": 0, 00:18:51.153 "state": "enabled", 00:18:51.153 "thread": "nvmf_tgt_poll_group_000", 00:18:51.153 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:18:51.153 "listen_address": { 00:18:51.153 "trtype": "RDMA", 00:18:51.153 "adrfam": "IPv4", 00:18:51.153 "traddr": "192.168.100.8", 00:18:51.153 "trsvcid": "4420" 00:18:51.153 }, 00:18:51.153 "peer_address": { 00:18:51.153 "trtype": "RDMA", 00:18:51.153 "adrfam": "IPv4", 00:18:51.153 "traddr": "192.168.100.8", 00:18:51.153 "trsvcid": "38137" 00:18:51.153 }, 00:18:51.153 "auth": { 00:18:51.153 "state": "completed", 00:18:51.153 "digest": "sha256", 00:18:51.153 "dhgroup": "null" 00:18:51.153 } 00:18:51.153 } 00:18:51.153 ]' 00:18:51.153 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.412 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.412 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.412 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:51.412 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.412 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.412 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.412 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.670 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:18:51.670 21:48:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:18:52.238 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.238 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:52.238 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.238 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.238 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.238 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.238 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.238 21:48:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.497 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.498 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.756 00:18:52.756 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.757 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.757 21:48:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.015 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.016 { 00:18:53.016 "cntlid": 7, 
00:18:53.016 "qid": 0, 00:18:53.016 "state": "enabled", 00:18:53.016 "thread": "nvmf_tgt_poll_group_000", 00:18:53.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:18:53.016 "listen_address": { 00:18:53.016 "trtype": "RDMA", 00:18:53.016 "adrfam": "IPv4", 00:18:53.016 "traddr": "192.168.100.8", 00:18:53.016 "trsvcid": "4420" 00:18:53.016 }, 00:18:53.016 "peer_address": { 00:18:53.016 "trtype": "RDMA", 00:18:53.016 "adrfam": "IPv4", 00:18:53.016 "traddr": "192.168.100.8", 00:18:53.016 "trsvcid": "38986" 00:18:53.016 }, 00:18:53.016 "auth": { 00:18:53.016 "state": "completed", 00:18:53.016 "digest": "sha256", 00:18:53.016 "dhgroup": "null" 00:18:53.016 } 00:18:53.016 } 00:18:53.016 ]' 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.016 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.274 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:18:53.274 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:18:53.842 21:48:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.842 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:53.842 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.842 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.842 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.842 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.842 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.842 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.842 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.101 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.359 00:18:54.359 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.359 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.359 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.619 21:48:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.619 { 00:18:54.619 "cntlid": 9, 00:18:54.619 "qid": 0, 00:18:54.619 "state": "enabled", 00:18:54.619 "thread": "nvmf_tgt_poll_group_000", 00:18:54.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:18:54.619 "listen_address": { 00:18:54.619 "trtype": "RDMA", 00:18:54.619 "adrfam": "IPv4", 00:18:54.619 "traddr": "192.168.100.8", 00:18:54.619 "trsvcid": "4420" 00:18:54.619 }, 00:18:54.619 "peer_address": { 00:18:54.619 "trtype": "RDMA", 00:18:54.619 "adrfam": "IPv4", 00:18:54.619 "traddr": "192.168.100.8", 00:18:54.619 "trsvcid": "48121" 00:18:54.619 }, 00:18:54.619 "auth": { 00:18:54.619 "state": "completed", 00:18:54.619 "digest": "sha256", 00:18:54.619 "dhgroup": "ffdhe2048" 00:18:54.619 } 00:18:54.619 } 00:18:54.619 ]' 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.619 21:48:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.878 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:18:54.878 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:18:55.446 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.705 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:55.705 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.705 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.705 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.705 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.705 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.705 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.965 21:48:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.224 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.224 21:48:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.224 { 00:18:56.224 "cntlid": 11, 00:18:56.224 "qid": 0, 00:18:56.224 "state": "enabled", 00:18:56.224 "thread": "nvmf_tgt_poll_group_000", 00:18:56.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:18:56.224 "listen_address": { 00:18:56.224 "trtype": "RDMA", 00:18:56.224 "adrfam": "IPv4", 00:18:56.224 "traddr": "192.168.100.8", 00:18:56.224 "trsvcid": "4420" 00:18:56.224 }, 00:18:56.224 "peer_address": { 00:18:56.224 "trtype": "RDMA", 00:18:56.224 "adrfam": "IPv4", 00:18:56.224 "traddr": "192.168.100.8", 00:18:56.224 "trsvcid": "36930" 00:18:56.224 }, 00:18:56.224 "auth": { 00:18:56.224 "state": "completed", 00:18:56.224 "digest": "sha256", 00:18:56.224 "dhgroup": "ffdhe2048" 00:18:56.224 } 00:18:56.224 } 00:18:56.224 ]' 00:18:56.224 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.483 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.483 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.483 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.483 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.483 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.483 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.483 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.743 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:18:56.743 21:48:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:18:57.310 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.310 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:57.310 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.310 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.310 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.310 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.310 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.310 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.568 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.827 00:18:57.827 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.827 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
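[Annotation] For readers following the trace: the same host/target RPC round repeats above for every digest, DH group, and key index. Below is a minimal condensed sketch of one such round in the shell the test itself uses. Every RPC name and flag appears verbatim in the log; the variables merely stand in for the repeated literals, the named keys (key2/ckey2) are assumed to have been registered earlier in the run, and the target-side calls assume rpc.py's default socket, which is what the trace's rpc_cmd wrapper resolves to here. The trace additionally exercises the kernel initiator via nvme connect/disconnect between the detach and the remove_host, elided in this sketch.

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                  # SPDK host (initiator) application
SUBNQN=nqn.2024-03.io.spdk:cnode0             # target subsystem under test
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# Host side ("hostrpc" in the trace): pin the initiator to one
# digest/dhgroup combination for this round.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null

# Target side ("rpc_cmd" in the trace): admit the host and bind it to a
# DH-HMAC-CHAP key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller -- this is where authentication runs.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 \
    -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Tear down so the next digest/dhgroup/key combination starts clean.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"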
00:18:57.827 21:48:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.086 { 00:18:58.086 "cntlid": 13, 00:18:58.086 "qid": 0, 00:18:58.086 "state": "enabled", 00:18:58.086 "thread": "nvmf_tgt_poll_group_000", 00:18:58.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:18:58.086 "listen_address": { 00:18:58.086 "trtype": "RDMA", 00:18:58.086 "adrfam": "IPv4", 00:18:58.086 "traddr": "192.168.100.8", 00:18:58.086 "trsvcid": "4420" 00:18:58.086 }, 00:18:58.086 "peer_address": { 00:18:58.086 "trtype": "RDMA", 00:18:58.086 "adrfam": "IPv4", 00:18:58.086 "traddr": "192.168.100.8", 00:18:58.086 "trsvcid": "41986" 00:18:58.086 }, 00:18:58.086 "auth": { 00:18:58.086 "state": "completed", 00:18:58.086 "digest": "sha256", 00:18:58.086 "dhgroup": "ffdhe2048" 00:18:58.086 } 00:18:58.086 } 00:18:58.086 ]' 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.086 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.345 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:18:58.345 21:48:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:18:58.912 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.171 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:59.171 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.171 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.171 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.171 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.171 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.171 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:59.430 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:59.689 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.689 21:48:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.689 { 00:18:59.689 "cntlid": 15, 00:18:59.689 "qid": 0, 00:18:59.689 "state": "enabled", 00:18:59.689 "thread": "nvmf_tgt_poll_group_000", 00:18:59.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:18:59.689 "listen_address": { 00:18:59.689 "trtype": "RDMA", 00:18:59.689 "adrfam": "IPv4", 00:18:59.689 "traddr": "192.168.100.8", 00:18:59.689 "trsvcid": "4420" 00:18:59.689 }, 00:18:59.689 "peer_address": { 00:18:59.689 "trtype": "RDMA", 00:18:59.689 "adrfam": "IPv4", 00:18:59.689 "traddr": "192.168.100.8", 00:18:59.689 "trsvcid": "35418" 00:18:59.689 }, 00:18:59.689 "auth": { 00:18:59.689 "state": "completed", 00:18:59.689 "digest": "sha256", 00:18:59.689 "dhgroup": "ffdhe2048" 00:18:59.689 } 00:18:59.689 } 00:18:59.689 ]' 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.689 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.948 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.948 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.948 21:48:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.948 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.948 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.948 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.207 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:00.207 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:00.775 21:48:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.775 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:00.775 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.775 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.775 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.775 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.775 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.775 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.775 21:48:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.034 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.293 00:19:01.293 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.293 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.293 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.552 { 00:19:01.552 "cntlid": 17, 00:19:01.552 "qid": 0, 00:19:01.552 "state": "enabled", 00:19:01.552 "thread": "nvmf_tgt_poll_group_000", 00:19:01.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:01.552 "listen_address": { 00:19:01.552 "trtype": "RDMA", 00:19:01.552 "adrfam": "IPv4", 00:19:01.552 "traddr": "192.168.100.8", 00:19:01.552 "trsvcid": "4420" 00:19:01.552 }, 00:19:01.552 "peer_address": { 00:19:01.552 "trtype": "RDMA", 00:19:01.552 "adrfam": "IPv4", 00:19:01.552 "traddr": "192.168.100.8", 00:19:01.552 "trsvcid": "52235" 00:19:01.552 }, 00:19:01.552 "auth": { 00:19:01.552 "state": "completed", 00:19:01.552 "digest": "sha256", 00:19:01.552 "dhgroup": "ffdhe3072" 00:19:01.552 } 00:19:01.552 } 00:19:01.552 ]' 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.552 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.811 21:48:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:01.811 21:48:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:02.379 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.639 21:48:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.898 00:19:02.898 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.898 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.898 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.156 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.156 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.157 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.157 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.157 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.157 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.157 { 00:19:03.157 "cntlid": 19, 00:19:03.157 "qid": 0, 00:19:03.157 "state": "enabled", 00:19:03.157 "thread": "nvmf_tgt_poll_group_000", 00:19:03.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:03.157 "listen_address": { 00:19:03.157 "trtype": "RDMA", 00:19:03.157 "adrfam": "IPv4", 00:19:03.157 "traddr": "192.168.100.8", 00:19:03.157 "trsvcid": "4420" 00:19:03.157 }, 00:19:03.157 "peer_address": { 00:19:03.157 "trtype": "RDMA", 00:19:03.157 "adrfam": "IPv4", 00:19:03.157 "traddr": "192.168.100.8", 00:19:03.157 "trsvcid": "40347" 00:19:03.157 }, 00:19:03.157 "auth": { 00:19:03.157 "state": "completed", 00:19:03.157 "digest": "sha256", 00:19:03.157 "dhgroup": "ffdhe3072" 00:19:03.157 } 00:19:03.157 } 00:19:03.157 ]' 00:19:03.157 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.157 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.157 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.415 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.415 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.415 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.415 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.415 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.674 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:03.675 21:48:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:04.242 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.242 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:04.242 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.242 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.242 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.242 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.242 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.242 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.501 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.502 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.760 00:19:04.760 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.760 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.760 21:48:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.019 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.020 { 00:19:05.020 "cntlid": 21, 00:19:05.020 "qid": 0, 00:19:05.020 "state": "enabled", 00:19:05.020 "thread": "nvmf_tgt_poll_group_000", 00:19:05.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:05.020 "listen_address": { 00:19:05.020 "trtype": "RDMA", 00:19:05.020 "adrfam": "IPv4", 00:19:05.020 "traddr": "192.168.100.8", 00:19:05.020 "trsvcid": "4420" 00:19:05.020 }, 00:19:05.020 "peer_address": { 00:19:05.020 "trtype": "RDMA", 00:19:05.020 "adrfam": "IPv4", 00:19:05.020 "traddr": "192.168.100.8", 00:19:05.020 "trsvcid": "40306" 00:19:05.020 }, 00:19:05.020 "auth": { 00:19:05.020 "state": "completed", 00:19:05.020 "digest": "sha256", 00:19:05.020 "dhgroup": "ffdhe3072" 00:19:05.020 } 00:19:05.020 } 00:19:05.020 ]' 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.020 21:48:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.020 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.278 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:05.278 21:48:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:05.846 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.105 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:06.106 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.106 21:48:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.106 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.106 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:06.106 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.106 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.364 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.623 { 00:19:06.623 "cntlid": 23, 00:19:06.623 "qid": 0, 00:19:06.623 "state": "enabled", 00:19:06.623 "thread": "nvmf_tgt_poll_group_000", 00:19:06.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:06.623 "listen_address": { 00:19:06.623 "trtype": "RDMA", 00:19:06.623 "adrfam": "IPv4", 00:19:06.623 "traddr": "192.168.100.8", 00:19:06.623 "trsvcid": "4420" 00:19:06.623 }, 00:19:06.623 "peer_address": { 00:19:06.623 "trtype": "RDMA", 00:19:06.623 "adrfam": "IPv4", 00:19:06.623 "traddr": "192.168.100.8", 00:19:06.623 "trsvcid": "34399" 00:19:06.623 }, 00:19:06.623 "auth": { 00:19:06.623 "state": "completed", 00:19:06.623 "digest": "sha256", 00:19:06.623 "dhgroup": "ffdhe3072" 00:19:06.623 } 00:19:06.623 } 00:19:06.623 ]' 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.623 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.882 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.882 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:19:06.882 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.882 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.882 21:48:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.141 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:07.141 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.710 21:48:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.969 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.228 00:19:08.228 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.228 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.228 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.487 { 00:19:08.487 "cntlid": 25, 00:19:08.487 "qid": 0, 00:19:08.487 "state": "enabled", 00:19:08.487 "thread": "nvmf_tgt_poll_group_000", 00:19:08.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:08.487 "listen_address": { 00:19:08.487 "trtype": "RDMA", 00:19:08.487 "adrfam": "IPv4", 00:19:08.487 "traddr": "192.168.100.8", 00:19:08.487 "trsvcid": "4420" 00:19:08.487 }, 00:19:08.487 "peer_address": { 00:19:08.487 "trtype": "RDMA", 00:19:08.487 "adrfam": "IPv4", 00:19:08.487 "traddr": "192.168.100.8", 00:19:08.487 "trsvcid": "56712" 00:19:08.487 }, 00:19:08.487 "auth": { 00:19:08.487 "state": "completed", 00:19:08.487 "digest": "sha256", 00:19:08.487 "dhgroup": "ffdhe4096" 00:19:08.487 } 00:19:08.487 } 00:19:08.487 ]' 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.487 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.746 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:08.746 21:48:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:09.314 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.573 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:09.573 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.573 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.573 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.573 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.573 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.573 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:09.832 21:48:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.832 21:48:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.092 00:19:10.092 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.092 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.092 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.352 { 00:19:10.352 "cntlid": 27, 00:19:10.352 "qid": 0, 00:19:10.352 "state": "enabled", 00:19:10.352 "thread": "nvmf_tgt_poll_group_000", 00:19:10.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:10.352 "listen_address": { 00:19:10.352 "trtype": "RDMA", 00:19:10.352 "adrfam": "IPv4", 00:19:10.352 "traddr": "192.168.100.8", 00:19:10.352 "trsvcid": "4420" 00:19:10.352 }, 00:19:10.352 "peer_address": { 00:19:10.352 "trtype": "RDMA", 00:19:10.352 "adrfam": "IPv4", 00:19:10.352 "traddr": "192.168.100.8", 00:19:10.352 "trsvcid": "37952" 00:19:10.352 }, 00:19:10.352 "auth": { 00:19:10.352 "state": "completed", 00:19:10.352 "digest": "sha256", 00:19:10.352 "dhgroup": "ffdhe4096" 00:19:10.352 } 00:19:10.352 } 
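
The jq assertions on the next lines pull the negotiated parameters back out of this qpair dump. A minimal standalone sketch of the same verification, assuming a running target reachable over rpc.py's default socket and jq on PATH; the qpairs variable name mirrors target/auth.sh, everything else is illustrative:

  # fetch the active qpairs for the subsystem, then assert on the negotiated auth block
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]   # agreed hash function
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # agreed FFDHE group for this pass
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP handshake finished
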
00:19:10.352 ]' 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.352 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.611 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:10.611 21:48:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:11.181 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.181 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:11.181 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.181 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.182 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.182 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.182 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.443 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.701 00:19:11.701 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.701 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.701 21:48:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.960 { 00:19:11.960 "cntlid": 29, 00:19:11.960 "qid": 0, 00:19:11.960 "state": "enabled", 00:19:11.960 "thread": "nvmf_tgt_poll_group_000", 00:19:11.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:11.960 "listen_address": { 00:19:11.960 "trtype": "RDMA", 00:19:11.960 "adrfam": "IPv4", 00:19:11.960 "traddr": "192.168.100.8", 00:19:11.960 "trsvcid": "4420" 00:19:11.960 }, 00:19:11.960 "peer_address": { 00:19:11.960 "trtype": "RDMA", 00:19:11.960 "adrfam": 
"IPv4", 00:19:11.960 "traddr": "192.168.100.8", 00:19:11.960 "trsvcid": "48131" 00:19:11.960 }, 00:19:11.960 "auth": { 00:19:11.960 "state": "completed", 00:19:11.960 "digest": "sha256", 00:19:11.960 "dhgroup": "ffdhe4096" 00:19:11.960 } 00:19:11.960 } 00:19:11.960 ]' 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.960 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.219 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.219 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.219 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.219 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:12.219 21:48:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:12.848 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe4096 3 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.106 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.673 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.673 { 00:19:13.673 "cntlid": 31, 00:19:13.673 "qid": 0, 00:19:13.673 "state": "enabled", 00:19:13.673 "thread": "nvmf_tgt_poll_group_000", 00:19:13.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:13.673 "listen_address": { 00:19:13.673 "trtype": "RDMA", 00:19:13.673 "adrfam": "IPv4", 00:19:13.673 
"traddr": "192.168.100.8", 00:19:13.673 "trsvcid": "4420" 00:19:13.673 }, 00:19:13.673 "peer_address": { 00:19:13.673 "trtype": "RDMA", 00:19:13.673 "adrfam": "IPv4", 00:19:13.673 "traddr": "192.168.100.8", 00:19:13.673 "trsvcid": "41789" 00:19:13.673 }, 00:19:13.673 "auth": { 00:19:13.673 "state": "completed", 00:19:13.673 "digest": "sha256", 00:19:13.673 "dhgroup": "ffdhe4096" 00:19:13.673 } 00:19:13.673 } 00:19:13.673 ]' 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.673 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.931 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.931 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.931 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.931 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.931 21:48:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.931 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:13.932 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.867 21:48:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.867 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.436 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.436 { 00:19:15.436 "cntlid": 33, 00:19:15.436 "qid": 0, 00:19:15.436 "state": "enabled", 
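
Two RPC channels alternate throughout these dumps: rpc_cmd drives the target application over rpc.py's default socket (normally /var/tmp/spdk.sock), while hostrpc wraps the same rpc.py against the host application's socket at /var/tmp/host.sock, as the target/auth.sh@31 expansions show. A two-line illustration of the split, with socket paths assumed as in this job:

  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # target side (rpc_cmd)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers       # host side (hostrpc)
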
00:19:15.436 "thread": "nvmf_tgt_poll_group_000", 00:19:15.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:15.436 "listen_address": { 00:19:15.436 "trtype": "RDMA", 00:19:15.436 "adrfam": "IPv4", 00:19:15.436 "traddr": "192.168.100.8", 00:19:15.436 "trsvcid": "4420" 00:19:15.436 }, 00:19:15.436 "peer_address": { 00:19:15.436 "trtype": "RDMA", 00:19:15.436 "adrfam": "IPv4", 00:19:15.436 "traddr": "192.168.100.8", 00:19:15.436 "trsvcid": "40361" 00:19:15.436 }, 00:19:15.436 "auth": { 00:19:15.436 "state": "completed", 00:19:15.436 "digest": "sha256", 00:19:15.436 "dhgroup": "ffdhe6144" 00:19:15.436 } 00:19:15.436 } 00:19:15.436 ]' 00:19:15.436 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.695 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.695 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.695 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.695 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.695 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.695 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.695 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.953 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:15.953 21:48:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:16.520 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.520 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:16.520 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.520 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.520 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.520 21:48:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.520 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.779 21:48:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.038 00:19:17.038 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.038 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.038 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.297 { 00:19:17.297 "cntlid": 35, 00:19:17.297 "qid": 0, 00:19:17.297 "state": "enabled", 00:19:17.297 "thread": "nvmf_tgt_poll_group_000", 00:19:17.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:17.297 "listen_address": { 00:19:17.297 "trtype": "RDMA", 00:19:17.297 "adrfam": "IPv4", 00:19:17.297 "traddr": "192.168.100.8", 00:19:17.297 "trsvcid": "4420" 00:19:17.297 }, 00:19:17.297 "peer_address": { 00:19:17.297 "trtype": "RDMA", 00:19:17.297 "adrfam": "IPv4", 00:19:17.297 "traddr": "192.168.100.8", 00:19:17.297 "trsvcid": "60170" 00:19:17.297 }, 00:19:17.297 "auth": { 00:19:17.297 "state": "completed", 00:19:17.297 "digest": "sha256", 00:19:17.297 "dhgroup": "ffdhe6144" 00:19:17.297 } 00:19:17.297 } 00:19:17.297 ]' 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.297 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.556 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.556 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.556 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.556 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.556 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.556 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:17.556 21:48:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
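
Each pass of the loop resuming below runs the same cycle for the next key index: pin the host's DH-HMAC-CHAP offer with bdev_nvme_set_options, authorize the key pair on the subsystem, authenticate once through the SPDK host stack and once through the kernel initiator, then tear everything down. A condensed sketch of one iteration, assuming the named keys (key2/ckey2) were registered earlier in the run and that $hostnqn, $hostid, $key and $ckey hold the host NQN/ID and the DHHC-1 secrets seen above:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host side: only offer sha256 + ffdhe6144 for this pass
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # target side: authorize the host with this key pair
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # authenticate via the SPDK host stack, then detach again
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # same handshake through the kernel initiator, secrets passed inline
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid "$hostid" -l 0 --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"
  # de-authorize before the next key is tried
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The in-between verification (controller name, qpair auth state) is elided here; it is the jq sequence sketched earlier.
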
00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.492 21:48:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.057 00:19:19.057 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.057 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.057 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.057 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.057 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.058 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.058 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.058 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.058 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.058 { 00:19:19.058 "cntlid": 37, 00:19:19.058 "qid": 0, 00:19:19.058 "state": "enabled", 00:19:19.058 "thread": "nvmf_tgt_poll_group_000", 00:19:19.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:19.058 "listen_address": { 00:19:19.058 "trtype": "RDMA", 00:19:19.058 "adrfam": "IPv4", 00:19:19.058 "traddr": "192.168.100.8", 00:19:19.058 "trsvcid": "4420" 00:19:19.058 }, 00:19:19.058 "peer_address": { 00:19:19.058 "trtype": "RDMA", 00:19:19.058 "adrfam": "IPv4", 00:19:19.058 "traddr": "192.168.100.8", 00:19:19.058 "trsvcid": "51084" 00:19:19.058 }, 00:19:19.058 "auth": { 00:19:19.058 "state": "completed", 00:19:19.058 "digest": "sha256", 00:19:19.058 "dhgroup": "ffdhe6144" 00:19:19.058 } 00:19:19.058 } 00:19:19.058 ]' 00:19:19.058 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.315 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.315 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.315 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.315 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.315 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.315 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.315 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.573 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:19.573 21:48:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:20.139 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.139 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:20.139 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.139 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.139 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.139 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.140 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.140 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.397 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.655 00:19:20.655 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.655 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.655 21:48:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.913 
21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.913 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.913 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.913 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.913 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.913 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.913 { 00:19:20.913 "cntlid": 39, 00:19:20.913 "qid": 0, 00:19:20.913 "state": "enabled", 00:19:20.913 "thread": "nvmf_tgt_poll_group_000", 00:19:20.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:20.913 "listen_address": { 00:19:20.913 "trtype": "RDMA", 00:19:20.913 "adrfam": "IPv4", 00:19:20.913 "traddr": "192.168.100.8", 00:19:20.913 "trsvcid": "4420" 00:19:20.913 }, 00:19:20.913 "peer_address": { 00:19:20.913 "trtype": "RDMA", 00:19:20.913 "adrfam": "IPv4", 00:19:20.913 "traddr": "192.168.100.8", 00:19:20.913 "trsvcid": "48603" 00:19:20.913 }, 00:19:20.913 "auth": { 00:19:20.913 "state": "completed", 00:19:20.913 "digest": "sha256", 00:19:20.913 "dhgroup": "ffdhe6144" 00:19:20.913 } 00:19:20.913 } 00:19:20.913 ]' 00:19:20.913 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.913 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.913 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.172 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.172 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.172 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.172 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.172 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.431 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:21.431 21:48:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.997 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.256 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.822 00:19:22.822 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.822 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.822 21:48:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.822 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.822 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.822 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.822 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.822 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.822 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.822 { 00:19:22.822 "cntlid": 41, 00:19:22.822 "qid": 0, 00:19:22.822 "state": "enabled", 00:19:22.822 "thread": "nvmf_tgt_poll_group_000", 00:19:22.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:22.822 "listen_address": { 00:19:22.822 "trtype": "RDMA", 00:19:22.822 "adrfam": "IPv4", 00:19:22.822 "traddr": "192.168.100.8", 00:19:22.822 "trsvcid": "4420" 00:19:22.822 }, 00:19:22.822 "peer_address": { 00:19:22.822 "trtype": "RDMA", 00:19:22.822 "adrfam": "IPv4", 00:19:22.822 "traddr": "192.168.100.8", 00:19:22.822 "trsvcid": "33976" 00:19:22.822 }, 00:19:22.822 "auth": { 00:19:22.822 "state": "completed", 00:19:22.822 "digest": "sha256", 00:19:22.822 "dhgroup": "ffdhe8192" 00:19:22.822 } 00:19:22.822 } 00:19:22.822 ]' 00:19:22.822 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.091 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.091 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.091 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.091 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.091 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.091 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.091 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.349 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:23.349 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: 
--dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:23.914 21:48:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.914 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:23.914 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.914 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.914 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.914 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.914 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.914 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.172 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.738 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.738 { 00:19:24.738 "cntlid": 43, 00:19:24.738 "qid": 0, 00:19:24.738 "state": "enabled", 00:19:24.738 "thread": "nvmf_tgt_poll_group_000", 00:19:24.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:24.738 "listen_address": { 00:19:24.738 "trtype": "RDMA", 00:19:24.738 "adrfam": "IPv4", 00:19:24.738 "traddr": "192.168.100.8", 00:19:24.738 "trsvcid": "4420" 00:19:24.738 }, 00:19:24.738 "peer_address": { 00:19:24.738 "trtype": "RDMA", 00:19:24.738 "adrfam": "IPv4", 00:19:24.738 "traddr": "192.168.100.8", 00:19:24.738 "trsvcid": "47989" 00:19:24.738 }, 00:19:24.738 "auth": { 00:19:24.738 "state": "completed", 00:19:24.738 "digest": "sha256", 00:19:24.738 "dhgroup": "ffdhe8192" 00:19:24.738 } 00:19:24.738 } 00:19:24.738 ]' 00:19:24.738 21:48:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.996 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.996 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.996 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.996 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.996 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.996 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.996 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.254 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:25.254 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:25.820 21:48:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.820 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:25.820 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.820 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.820 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.820 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.820 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.820 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
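
Each attach is then verified from the target side before the same credentials are replayed through the kernel initiator. The following is a condensed sketch of that check for the sha256/ffdhe8192 round in progress here, assuming the same rpc, subnqn, and hostnqn variables as the earlier sketch, with $key2_secret and $ckey2_secret standing in for the plaintext DHHC-1 strings printed in the log:

# The target reports the negotiated auth parameters per queue pair.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]

# Drop the SPDK-side controller, then repeat the handshake with nvme-cli,
# passing the DH-HMAC-CHAP secrets directly on the command line.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
    --dhchap-secret "$key2_secret" --dhchap-ctrl-secret "$ckey2_secret"
nvme disconnect -n "$subnqn"

The round then closes with nvmf_subsystem_remove_host, so the next digest/dhgroup/key combination starts from a subsystem with no authorized hosts.
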
00:19:26.078 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.643 00:19:26.643 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.643 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.643 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.900 { 00:19:26.900 "cntlid": 45, 00:19:26.900 "qid": 0, 00:19:26.900 "state": "enabled", 00:19:26.900 "thread": "nvmf_tgt_poll_group_000", 00:19:26.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:26.900 "listen_address": { 00:19:26.900 "trtype": "RDMA", 00:19:26.900 "adrfam": "IPv4", 00:19:26.900 "traddr": "192.168.100.8", 00:19:26.900 "trsvcid": "4420" 00:19:26.900 }, 00:19:26.900 "peer_address": { 00:19:26.900 "trtype": "RDMA", 00:19:26.900 "adrfam": "IPv4", 00:19:26.900 "traddr": "192.168.100.8", 00:19:26.900 "trsvcid": "51132" 00:19:26.900 }, 00:19:26.900 "auth": { 00:19:26.900 "state": "completed", 00:19:26.900 "digest": "sha256", 00:19:26.900 "dhgroup": "ffdhe8192" 00:19:26.900 } 00:19:26.900 } 00:19:26.900 ]' 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.900 21:48:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.900 21:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.900 21:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.900 21:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.900 21:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.157 21:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:27.158 21:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:27.723 21:48:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.981 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:27.981 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.981 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.981 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.981 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.981 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.981 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.239 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.805 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.805 { 00:19:28.805 "cntlid": 47, 00:19:28.805 "qid": 0, 00:19:28.805 "state": "enabled", 00:19:28.805 "thread": "nvmf_tgt_poll_group_000", 00:19:28.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:28.805 "listen_address": { 00:19:28.805 "trtype": "RDMA", 00:19:28.805 "adrfam": "IPv4", 00:19:28.805 "traddr": "192.168.100.8", 00:19:28.805 "trsvcid": "4420" 00:19:28.805 }, 00:19:28.805 "peer_address": { 00:19:28.805 "trtype": "RDMA", 00:19:28.805 "adrfam": "IPv4", 00:19:28.805 "traddr": "192.168.100.8", 00:19:28.805 "trsvcid": "33930" 00:19:28.805 }, 00:19:28.805 "auth": { 00:19:28.805 "state": "completed", 00:19:28.805 "digest": "sha256", 00:19:28.805 "dhgroup": "ffdhe8192" 00:19:28.805 } 00:19:28.805 } 00:19:28.805 ]' 00:19:28.805 21:49:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.805 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.805 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.805 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.063 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.063 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.063 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.063 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.063 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:29.063 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:29.633 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.891 21:49:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:29.891 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.891 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.891 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.891 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:29.891 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.891 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.891 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:29.891 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.148 
21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.148 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.149 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.406 00:19:30.406 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.406 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.406 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.406 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.406 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.406 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.406 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.664 { 00:19:30.664 "cntlid": 49, 00:19:30.664 "qid": 0, 00:19:30.664 "state": "enabled", 00:19:30.664 "thread": "nvmf_tgt_poll_group_000", 00:19:30.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:30.664 "listen_address": { 00:19:30.664 "trtype": "RDMA", 00:19:30.664 "adrfam": "IPv4", 00:19:30.664 "traddr": "192.168.100.8", 00:19:30.664 "trsvcid": "4420" 00:19:30.664 }, 00:19:30.664 "peer_address": { 00:19:30.664 "trtype": "RDMA", 00:19:30.664 "adrfam": "IPv4", 00:19:30.664 "traddr": "192.168.100.8", 00:19:30.664 "trsvcid": "44605" 00:19:30.664 }, 00:19:30.664 "auth": { 00:19:30.664 "state": "completed", 00:19:30.664 "digest": "sha384", 00:19:30.664 "dhgroup": "null" 00:19:30.664 } 00:19:30.664 } 00:19:30.664 ]' 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.664 21:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.664 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.922 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:30.923 21:49:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:31.489 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.489 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:31.489 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.489 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.489 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.489 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.489 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.489 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.747 21:49:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.005 00:19:32.005 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.005 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.005 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.264 { 00:19:32.264 "cntlid": 51, 00:19:32.264 "qid": 0, 00:19:32.264 "state": "enabled", 00:19:32.264 "thread": "nvmf_tgt_poll_group_000", 00:19:32.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:32.264 "listen_address": { 00:19:32.264 "trtype": "RDMA", 00:19:32.264 "adrfam": "IPv4", 00:19:32.264 "traddr": "192.168.100.8", 00:19:32.264 "trsvcid": "4420" 00:19:32.264 }, 00:19:32.264 "peer_address": { 00:19:32.264 "trtype": "RDMA", 00:19:32.264 "adrfam": "IPv4", 00:19:32.264 "traddr": "192.168.100.8", 00:19:32.264 "trsvcid": "48357" 00:19:32.264 }, 00:19:32.264 "auth": { 00:19:32.264 "state": "completed", 00:19:32.264 "digest": "sha384", 00:19:32.264 "dhgroup": "null" 00:19:32.264 } 00:19:32.264 } 00:19:32.264 ]' 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.264 
21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.264 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.522 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:32.523 21:49:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:33.091 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.348 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:33.348 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.348 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.348 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.348 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.348 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.348 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.606 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.865 00:19:33.865 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.865 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.865 21:49:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.865 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.865 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.865 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.865 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.865 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.865 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.865 { 00:19:33.865 "cntlid": 53, 00:19:33.865 "qid": 0, 00:19:33.865 "state": "enabled", 00:19:33.865 "thread": "nvmf_tgt_poll_group_000", 00:19:33.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:33.865 "listen_address": { 00:19:33.865 "trtype": "RDMA", 00:19:33.865 "adrfam": "IPv4", 00:19:33.865 "traddr": "192.168.100.8", 00:19:33.865 "trsvcid": "4420" 00:19:33.865 }, 00:19:33.865 "peer_address": { 00:19:33.865 "trtype": "RDMA", 00:19:33.865 "adrfam": "IPv4", 00:19:33.865 "traddr": "192.168.100.8", 00:19:33.865 "trsvcid": "34536" 00:19:33.865 }, 00:19:33.865 "auth": { 00:19:33.865 "state": "completed", 00:19:33.865 "digest": "sha384", 00:19:33.865 "dhgroup": "null" 00:19:33.865 } 00:19:33.865 } 00:19:33.865 ]' 00:19:33.865 21:49:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.123 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.123 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.123 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:34.123 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.123 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.123 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.123 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.381 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:34.381 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:34.948 21:49:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.948 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:34.948 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.948 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.948 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.948 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.948 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.948 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.206 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:35.206 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.206 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:35.206 21:49:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:35.206 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:35.206 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.207 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:35.207 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.207 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.207 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.207 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:35.207 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.207 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.464 00:19:35.464 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.464 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.464 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.722 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.722 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.722 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.722 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.722 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.722 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.722 { 00:19:35.722 "cntlid": 55, 00:19:35.722 "qid": 0, 00:19:35.722 "state": "enabled", 00:19:35.722 "thread": "nvmf_tgt_poll_group_000", 00:19:35.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:35.722 "listen_address": { 00:19:35.722 "trtype": "RDMA", 00:19:35.722 "adrfam": "IPv4", 00:19:35.722 "traddr": "192.168.100.8", 00:19:35.723 "trsvcid": "4420" 00:19:35.723 }, 00:19:35.723 "peer_address": { 00:19:35.723 "trtype": "RDMA", 00:19:35.723 "adrfam": "IPv4", 00:19:35.723 "traddr": "192.168.100.8", 00:19:35.723 "trsvcid": "35135" 00:19:35.723 }, 00:19:35.723 "auth": { 00:19:35.723 "state": 
"completed", 00:19:35.723 "digest": "sha384", 00:19:35.723 "dhgroup": "null" 00:19:35.723 } 00:19:35.723 } 00:19:35.723 ]' 00:19:35.723 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.723 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.723 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.723 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:35.723 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.723 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.723 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.723 21:49:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.981 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:35.981 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:36.551 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.939 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:36.939 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.939 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.939 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.939 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.939 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.939 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.939 21:49:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.939 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.196 00:19:37.197 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.197 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.197 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.455 { 00:19:37.455 "cntlid": 57, 00:19:37.455 "qid": 0, 00:19:37.455 "state": "enabled", 00:19:37.455 "thread": "nvmf_tgt_poll_group_000", 00:19:37.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:37.455 "listen_address": { 00:19:37.455 "trtype": "RDMA", 00:19:37.455 "adrfam": "IPv4", 00:19:37.455 "traddr": "192.168.100.8", 00:19:37.455 "trsvcid": "4420" 
00:19:37.455 }, 00:19:37.455 "peer_address": { 00:19:37.455 "trtype": "RDMA", 00:19:37.455 "adrfam": "IPv4", 00:19:37.455 "traddr": "192.168.100.8", 00:19:37.455 "trsvcid": "39128" 00:19:37.455 }, 00:19:37.455 "auth": { 00:19:37.455 "state": "completed", 00:19:37.455 "digest": "sha384", 00:19:37.455 "dhgroup": "ffdhe2048" 00:19:37.455 } 00:19:37.455 } 00:19:37.455 ]' 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.455 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.456 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.714 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:37.714 21:49:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:38.280 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.539 21:49:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.797 00:19:38.798 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.798 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.798 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.055 { 00:19:39.055 "cntlid": 59, 00:19:39.055 "qid": 0, 
00:19:39.055 "state": "enabled", 00:19:39.055 "thread": "nvmf_tgt_poll_group_000", 00:19:39.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:39.055 "listen_address": { 00:19:39.055 "trtype": "RDMA", 00:19:39.055 "adrfam": "IPv4", 00:19:39.055 "traddr": "192.168.100.8", 00:19:39.055 "trsvcid": "4420" 00:19:39.055 }, 00:19:39.055 "peer_address": { 00:19:39.055 "trtype": "RDMA", 00:19:39.055 "adrfam": "IPv4", 00:19:39.055 "traddr": "192.168.100.8", 00:19:39.055 "trsvcid": "39906" 00:19:39.055 }, 00:19:39.055 "auth": { 00:19:39.055 "state": "completed", 00:19:39.055 "digest": "sha384", 00:19:39.055 "dhgroup": "ffdhe2048" 00:19:39.055 } 00:19:39.055 } 00:19:39.055 ]' 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.055 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.312 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.313 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.313 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.313 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.313 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:39.313 21:49:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.246 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.505 00:19:40.505 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.505 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.763 21:49:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.763 { 00:19:40.763 "cntlid": 61, 00:19:40.763 "qid": 0, 00:19:40.763 "state": "enabled", 00:19:40.763 "thread": "nvmf_tgt_poll_group_000", 00:19:40.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:40.763 "listen_address": { 00:19:40.763 "trtype": "RDMA", 00:19:40.763 "adrfam": "IPv4", 00:19:40.763 "traddr": "192.168.100.8", 00:19:40.763 "trsvcid": "4420" 00:19:40.763 }, 00:19:40.763 "peer_address": { 00:19:40.763 "trtype": "RDMA", 00:19:40.763 "adrfam": "IPv4", 00:19:40.763 "traddr": "192.168.100.8", 00:19:40.763 "trsvcid": "33398" 00:19:40.763 }, 00:19:40.763 "auth": { 00:19:40.763 "state": "completed", 00:19:40.763 "digest": "sha384", 00:19:40.763 "dhgroup": "ffdhe2048" 00:19:40.763 } 00:19:40.763 } 00:19:40.763 ]' 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.763 21:49:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.022 21:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.022 21:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.022 21:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.022 21:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.022 21:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.022 21:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:41.022 21:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:41.957 21:49:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.957 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:41.957 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.957 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.957 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.957 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.957 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.957 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.215 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.474 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.474 { 00:19:42.474 "cntlid": 63, 00:19:42.474 "qid": 0, 00:19:42.474 "state": "enabled", 00:19:42.474 "thread": "nvmf_tgt_poll_group_000", 00:19:42.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:42.474 "listen_address": { 00:19:42.474 "trtype": "RDMA", 00:19:42.474 "adrfam": "IPv4", 00:19:42.474 "traddr": "192.168.100.8", 00:19:42.474 "trsvcid": "4420" 00:19:42.474 }, 00:19:42.474 "peer_address": { 00:19:42.474 "trtype": "RDMA", 00:19:42.474 "adrfam": "IPv4", 00:19:42.474 "traddr": "192.168.100.8", 00:19:42.474 "trsvcid": "55477" 00:19:42.474 }, 00:19:42.474 "auth": { 00:19:42.474 "state": "completed", 00:19:42.474 "digest": "sha384", 00:19:42.474 "dhgroup": "ffdhe2048" 00:19:42.474 } 00:19:42.474 } 00:19:42.474 ]' 00:19:42.474 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.733 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.733 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.733 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.733 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.733 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.733 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.733 21:49:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.992 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:42.992 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:43.557 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.557 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:43.557 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.557 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.557 21:49:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.557 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.557 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.557 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.557 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.815 21:49:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.073 00:19:44.073 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.073 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.073 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.331 { 00:19:44.331 "cntlid": 65, 00:19:44.331 "qid": 0, 00:19:44.331 "state": "enabled", 00:19:44.331 "thread": "nvmf_tgt_poll_group_000", 00:19:44.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:44.331 "listen_address": { 00:19:44.331 "trtype": "RDMA", 00:19:44.331 "adrfam": "IPv4", 00:19:44.331 "traddr": "192.168.100.8", 00:19:44.331 "trsvcid": "4420" 00:19:44.331 }, 00:19:44.331 "peer_address": { 00:19:44.331 "trtype": "RDMA", 00:19:44.331 "adrfam": "IPv4", 00:19:44.331 "traddr": "192.168.100.8", 00:19:44.331 "trsvcid": "58755" 00:19:44.331 }, 00:19:44.331 "auth": { 00:19:44.331 "state": "completed", 00:19:44.331 "digest": "sha384", 00:19:44.331 "dhgroup": "ffdhe3072" 00:19:44.331 } 00:19:44.331 } 00:19:44.331 ]' 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.331 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.589 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:44.589 21:49:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:45.156 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.414 
21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:45.414 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.414 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.414 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.414 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.414 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.414 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.673 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.931 00:19:45.931 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.931 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.931 21:49:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.931 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.932 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.932 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.932 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.932 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.932 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.932 { 00:19:45.932 "cntlid": 67, 00:19:45.932 "qid": 0, 00:19:45.932 "state": "enabled", 00:19:45.932 "thread": "nvmf_tgt_poll_group_000", 00:19:45.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:45.932 "listen_address": { 00:19:45.932 "trtype": "RDMA", 00:19:45.932 "adrfam": "IPv4", 00:19:45.932 "traddr": "192.168.100.8", 00:19:45.932 "trsvcid": "4420" 00:19:45.932 }, 00:19:45.932 "peer_address": { 00:19:45.932 "trtype": "RDMA", 00:19:45.932 "adrfam": "IPv4", 00:19:45.932 "traddr": "192.168.100.8", 00:19:45.932 "trsvcid": "34696" 00:19:45.932 }, 00:19:45.932 "auth": { 00:19:45.932 "state": "completed", 00:19:45.932 "digest": "sha384", 00:19:45.932 "dhgroup": "ffdhe3072" 00:19:45.932 } 00:19:45.932 } 00:19:45.932 ]' 00:19:45.932 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.190 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.190 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.190 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.190 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.190 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.190 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.190 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.447 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:46.447 21:49:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret 
DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:47.012 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.012 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:47.012 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.012 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.012 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.012 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.012 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:47.012 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:47.270 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:47.270 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.270 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.270 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.271 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.529 00:19:47.529 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.529 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.529 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.787 { 00:19:47.787 "cntlid": 69, 00:19:47.787 "qid": 0, 00:19:47.787 "state": "enabled", 00:19:47.787 "thread": "nvmf_tgt_poll_group_000", 00:19:47.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:47.787 "listen_address": { 00:19:47.787 "trtype": "RDMA", 00:19:47.787 "adrfam": "IPv4", 00:19:47.787 "traddr": "192.168.100.8", 00:19:47.787 "trsvcid": "4420" 00:19:47.787 }, 00:19:47.787 "peer_address": { 00:19:47.787 "trtype": "RDMA", 00:19:47.787 "adrfam": "IPv4", 00:19:47.787 "traddr": "192.168.100.8", 00:19:47.787 "trsvcid": "41176" 00:19:47.787 }, 00:19:47.787 "auth": { 00:19:47.787 "state": "completed", 00:19:47.787 "digest": "sha384", 00:19:47.787 "dhgroup": "ffdhe3072" 00:19:47.787 } 00:19:47.787 } 00:19:47.787 ]' 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.787 21:49:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.787 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.787 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.787 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.045 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:48.045 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:48.611 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.869 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:48.869 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.869 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.869 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.869 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.869 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.869 21:49:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.127 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.385 00:19:49.385 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.385 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.385 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.643 { 00:19:49.643 "cntlid": 71, 00:19:49.643 "qid": 0, 00:19:49.643 "state": "enabled", 00:19:49.643 "thread": "nvmf_tgt_poll_group_000", 00:19:49.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:49.643 "listen_address": { 00:19:49.643 "trtype": "RDMA", 00:19:49.643 "adrfam": "IPv4", 00:19:49.643 "traddr": "192.168.100.8", 00:19:49.643 "trsvcid": "4420" 00:19:49.643 }, 00:19:49.643 "peer_address": { 00:19:49.643 "trtype": "RDMA", 00:19:49.643 "adrfam": "IPv4", 00:19:49.643 "traddr": "192.168.100.8", 00:19:49.643 "trsvcid": "57355" 00:19:49.643 }, 00:19:49.643 "auth": { 00:19:49.643 "state": "completed", 00:19:49.643 "digest": "sha384", 00:19:49.643 "dhgroup": "ffdhe3072" 00:19:49.643 } 00:19:49.643 } 00:19:49.643 ]' 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.643 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.644 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.644 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.644 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.644 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.902 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 
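At this point the trace has finished the sha384/ffdhe3072 passes for keys 0-2 and is starting the key3 pass. Every pass in this log follows the same cycle; the sketch below condenses it from commands already visible in the trace. It is a sketch of this run only: the NQNs, the 192.168.100.8:4420 listener, and the key index are copied from the surrounding records as placeholders, rpc.py paths and the DHHC secrets are abbreviated, and <hostnqn> stands for the nqn.2014-08.org.nvmexpress:uuid:8013ee90-... host NQN used throughout.

  # pin the SPDK host stack to one digest/dhgroup pair (target/auth.sh@121)
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # allow the host NQN on the subsystem with the key pair under test (target/auth.sh@70)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # authenticate from the SPDK host, then check the negotiated qpair (target/auth.sh@60, @73-@77)
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # expect auth.state == "completed"
  # repeat the handshake with the kernel initiator, then clean up (target/auth.sh@78-@83)
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> -l 0 --dhchap-secret DHHC-1:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>

The outer loops visible in the trace (target/auth.sh@119 over "${dhgroups[@]}", @120 over "${!keys[@]}") rerun this cycle for every key index and FFDHE group, so each secret/ctrl-secret pairing is exercised against both the SPDK bdev_nvme path and the kernel nvme-cli path; key3 has no ckey in this run, which is why its add_host and attach records below carry only --dhchap-key key3.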
00:19:49.902 21:49:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.469 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.728 21:49:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.986 00:19:50.986 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.986 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.986 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.244 { 00:19:51.244 "cntlid": 73, 00:19:51.244 "qid": 0, 00:19:51.244 "state": "enabled", 00:19:51.244 "thread": "nvmf_tgt_poll_group_000", 00:19:51.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:51.244 "listen_address": { 00:19:51.244 "trtype": "RDMA", 00:19:51.244 "adrfam": "IPv4", 00:19:51.244 "traddr": "192.168.100.8", 00:19:51.244 "trsvcid": "4420" 00:19:51.244 }, 00:19:51.244 "peer_address": { 00:19:51.244 "trtype": "RDMA", 00:19:51.244 "adrfam": "IPv4", 00:19:51.244 "traddr": "192.168.100.8", 00:19:51.244 "trsvcid": "60323" 00:19:51.244 }, 00:19:51.244 "auth": { 00:19:51.244 "state": "completed", 00:19:51.244 "digest": "sha384", 00:19:51.244 "dhgroup": "ffdhe4096" 00:19:51.244 } 00:19:51.244 } 00:19:51.244 ]' 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.244 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.502 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:51.502 21:49:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:52.068 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.327 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:52.327 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.327 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.327 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.327 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.327 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.327 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.586 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.844 00:19:52.844 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.844 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.844 21:49:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.102 { 00:19:53.102 "cntlid": 75, 00:19:53.102 "qid": 0, 00:19:53.102 "state": "enabled", 00:19:53.102 "thread": "nvmf_tgt_poll_group_000", 00:19:53.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:53.102 "listen_address": { 00:19:53.102 "trtype": "RDMA", 00:19:53.102 "adrfam": "IPv4", 00:19:53.102 "traddr": "192.168.100.8", 00:19:53.102 "trsvcid": "4420" 00:19:53.102 }, 00:19:53.102 "peer_address": { 00:19:53.102 "trtype": "RDMA", 00:19:53.102 "adrfam": "IPv4", 00:19:53.102 "traddr": "192.168.100.8", 00:19:53.102 "trsvcid": "59331" 00:19:53.102 }, 00:19:53.102 "auth": { 00:19:53.102 "state": "completed", 00:19:53.102 "digest": "sha384", 00:19:53.102 "dhgroup": "ffdhe4096" 00:19:53.102 } 00:19:53.102 } 00:19:53.102 ]' 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.102 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.360 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:53.360 21:49:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:19:53.927 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.186 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.446 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.446 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.446 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.446 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.705 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.705 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.705 { 00:19:54.705 "cntlid": 77, 00:19:54.705 "qid": 0, 00:19:54.705 "state": "enabled", 00:19:54.705 "thread": "nvmf_tgt_poll_group_000", 00:19:54.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:54.705 "listen_address": { 00:19:54.705 "trtype": "RDMA", 00:19:54.705 "adrfam": "IPv4", 00:19:54.705 "traddr": "192.168.100.8", 00:19:54.705 "trsvcid": "4420" 00:19:54.705 }, 00:19:54.705 "peer_address": { 00:19:54.705 "trtype": "RDMA", 00:19:54.705 "adrfam": "IPv4", 00:19:54.705 "traddr": "192.168.100.8", 00:19:54.705 "trsvcid": "45397" 00:19:54.705 }, 00:19:54.706 "auth": { 00:19:54.706 "state": "completed", 00:19:54.706 "digest": "sha384", 00:19:54.706 "dhgroup": "ffdhe4096" 00:19:54.706 } 00:19:54.706 } 00:19:54.706 ]' 00:19:54.706 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.965 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.965 21:49:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.965 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.965 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.965 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.965 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.965 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.224 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:55.224 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:19:55.791 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.791 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:55.791 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.791 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.791 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.791 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.791 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.791 21:49:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.051 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.310 00:19:56.310 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.310 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.310 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.569 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.569 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.569 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.569 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.570 { 00:19:56.570 "cntlid": 79, 00:19:56.570 "qid": 0, 00:19:56.570 "state": "enabled", 00:19:56.570 "thread": "nvmf_tgt_poll_group_000", 00:19:56.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:56.570 "listen_address": { 00:19:56.570 "trtype": "RDMA", 00:19:56.570 "adrfam": "IPv4", 00:19:56.570 "traddr": "192.168.100.8", 00:19:56.570 "trsvcid": "4420" 00:19:56.570 }, 00:19:56.570 "peer_address": { 00:19:56.570 "trtype": "RDMA", 00:19:56.570 "adrfam": "IPv4", 00:19:56.570 "traddr": "192.168.100.8", 00:19:56.570 "trsvcid": "51440" 00:19:56.570 }, 00:19:56.570 "auth": { 00:19:56.570 "state": "completed", 00:19:56.570 "digest": "sha384", 00:19:56.570 "dhgroup": "ffdhe4096" 00:19:56.570 } 00:19:56.570 } 00:19:56.570 ]' 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.570 21:49:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.570 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.829 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:56.829 21:49:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:57.765 21:49:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.765 21:49:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.331 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.331 { 00:19:58.331 "cntlid": 81, 00:19:58.331 "qid": 0, 00:19:58.331 "state": "enabled", 00:19:58.331 "thread": "nvmf_tgt_poll_group_000", 00:19:58.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:58.331 "listen_address": { 00:19:58.331 "trtype": "RDMA", 00:19:58.331 "adrfam": "IPv4", 00:19:58.331 "traddr": "192.168.100.8", 00:19:58.331 "trsvcid": "4420" 00:19:58.331 }, 00:19:58.331 "peer_address": { 00:19:58.331 "trtype": "RDMA", 00:19:58.331 "adrfam": "IPv4", 00:19:58.331 "traddr": "192.168.100.8", 00:19:58.331 "trsvcid": "55139" 00:19:58.331 }, 00:19:58.331 "auth": { 00:19:58.331 "state": "completed", 00:19:58.331 "digest": "sha384", 00:19:58.331 "dhgroup": "ffdhe6144" 00:19:58.331 } 00:19:58.331 } 
00:19:58.331 ]' 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.331 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.589 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.589 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.589 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.589 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.589 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.847 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:58.847 21:49:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:19:59.412 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.412 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:59.412 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.412 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.412 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.412 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.412 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.412 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.670 
21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.670 21:49:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.927 00:19:59.927 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.927 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.927 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.185 { 00:20:00.185 "cntlid": 83, 00:20:00.185 "qid": 0, 00:20:00.185 "state": "enabled", 00:20:00.185 "thread": "nvmf_tgt_poll_group_000", 00:20:00.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:00.185 "listen_address": { 00:20:00.185 "trtype": "RDMA", 00:20:00.185 "adrfam": "IPv4", 00:20:00.185 "traddr": "192.168.100.8", 00:20:00.185 "trsvcid": "4420" 00:20:00.185 }, 
00:20:00.185 "peer_address": { 00:20:00.185 "trtype": "RDMA", 00:20:00.185 "adrfam": "IPv4", 00:20:00.185 "traddr": "192.168.100.8", 00:20:00.185 "trsvcid": "46692" 00:20:00.185 }, 00:20:00.185 "auth": { 00:20:00.185 "state": "completed", 00:20:00.185 "digest": "sha384", 00:20:00.185 "dhgroup": "ffdhe6144" 00:20:00.185 } 00:20:00.185 } 00:20:00.185 ]' 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.185 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.516 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.516 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.516 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.516 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.516 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.516 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:00.516 21:49:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:01.112 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.370 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:01.370 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.371 21:49:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.371 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.938 00:20:01.939 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.939 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.939 21:49:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.939 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.939 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.939 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.939 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.939 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.939 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.939 { 00:20:01.939 "cntlid": 85, 00:20:01.939 "qid": 0, 00:20:01.939 "state": "enabled", 00:20:01.939 "thread": "nvmf_tgt_poll_group_000", 00:20:01.939 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:01.939 "listen_address": { 00:20:01.939 "trtype": "RDMA", 00:20:01.939 "adrfam": "IPv4", 00:20:01.939 "traddr": "192.168.100.8", 00:20:01.939 "trsvcid": "4420" 00:20:01.939 }, 00:20:01.939 "peer_address": { 00:20:01.939 "trtype": "RDMA", 00:20:01.939 "adrfam": "IPv4", 00:20:01.939 "traddr": "192.168.100.8", 00:20:01.939 "trsvcid": "40696" 00:20:01.939 }, 00:20:01.939 "auth": { 00:20:01.939 "state": "completed", 00:20:01.939 "digest": "sha384", 00:20:01.939 "dhgroup": "ffdhe6144" 00:20:01.939 } 00:20:01.939 } 00:20:01.939 ]' 00:20:01.939 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.198 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.198 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.198 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.198 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.198 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.198 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.198 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.457 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:02.457 21:49:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:03.025 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.025 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:03.025 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.025 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.025 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.025 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.025 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.026 21:49:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.283 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.284 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.284 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.284 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.542 00:20:03.542 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.542 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.542 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.801 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.801 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.802 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.802 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.802 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.802 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.802 { 00:20:03.802 
"cntlid": 87, 00:20:03.802 "qid": 0, 00:20:03.802 "state": "enabled", 00:20:03.802 "thread": "nvmf_tgt_poll_group_000", 00:20:03.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:03.802 "listen_address": { 00:20:03.802 "trtype": "RDMA", 00:20:03.802 "adrfam": "IPv4", 00:20:03.802 "traddr": "192.168.100.8", 00:20:03.802 "trsvcid": "4420" 00:20:03.802 }, 00:20:03.802 "peer_address": { 00:20:03.802 "trtype": "RDMA", 00:20:03.802 "adrfam": "IPv4", 00:20:03.802 "traddr": "192.168.100.8", 00:20:03.802 "trsvcid": "37677" 00:20:03.802 }, 00:20:03.802 "auth": { 00:20:03.802 "state": "completed", 00:20:03.802 "digest": "sha384", 00:20:03.802 "dhgroup": "ffdhe6144" 00:20:03.802 } 00:20:03.802 } 00:20:03.802 ]' 00:20:03.802 21:49:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.802 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.802 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.061 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.061 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.061 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.061 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.061 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.061 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:04.061 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:04.999 21:49:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.000 21:49:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.000 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.568 00:20:05.568 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.568 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.568 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.826 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.826 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.826 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.826 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.826 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.826 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.826 { 00:20:05.826 "cntlid": 89, 00:20:05.826 "qid": 0, 00:20:05.826 "state": "enabled", 00:20:05.826 "thread": "nvmf_tgt_poll_group_000", 00:20:05.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:05.826 "listen_address": { 00:20:05.826 "trtype": "RDMA", 00:20:05.827 "adrfam": "IPv4", 00:20:05.827 "traddr": "192.168.100.8", 00:20:05.827 "trsvcid": "4420" 00:20:05.827 }, 00:20:05.827 "peer_address": { 00:20:05.827 "trtype": "RDMA", 00:20:05.827 "adrfam": "IPv4", 00:20:05.827 "traddr": "192.168.100.8", 00:20:05.827 "trsvcid": "41750" 00:20:05.827 }, 00:20:05.827 "auth": { 00:20:05.827 "state": "completed", 00:20:05.827 "digest": "sha384", 00:20:05.827 "dhgroup": "ffdhe8192" 00:20:05.827 } 00:20:05.827 } 00:20:05.827 ]' 00:20:05.827 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.827 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.827 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.827 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.827 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.827 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.827 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.827 21:49:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.086 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:06.086 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:06.654 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.914 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:06.914 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.914 21:49:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.914 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.914 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.914 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.914 21:49:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.914 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.482 00:20:07.482 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.482 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.482 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.741 { 00:20:07.741 "cntlid": 91, 00:20:07.741 "qid": 0, 00:20:07.741 "state": "enabled", 00:20:07.741 "thread": "nvmf_tgt_poll_group_000", 00:20:07.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:07.741 "listen_address": { 00:20:07.741 "trtype": "RDMA", 00:20:07.741 "adrfam": "IPv4", 00:20:07.741 "traddr": "192.168.100.8", 00:20:07.741 "trsvcid": "4420" 00:20:07.741 }, 00:20:07.741 "peer_address": { 00:20:07.741 "trtype": "RDMA", 00:20:07.741 "adrfam": "IPv4", 00:20:07.741 "traddr": "192.168.100.8", 00:20:07.741 "trsvcid": "36025" 00:20:07.741 }, 00:20:07.741 "auth": { 00:20:07.741 "state": "completed", 00:20:07.741 "digest": "sha384", 00:20:07.741 "dhgroup": "ffdhe8192" 00:20:07.741 } 00:20:07.741 } 00:20:07.741 ]' 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.741 21:49:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.001 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:08.001 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:08.567 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.825 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:08.825 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.825 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.825 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.825 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.825 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.825 21:49:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.083 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.084 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.342 00:20:09.342 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.342 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.342 21:49:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.600 { 00:20:09.600 "cntlid": 93, 00:20:09.600 "qid": 0, 00:20:09.600 "state": "enabled", 00:20:09.600 "thread": "nvmf_tgt_poll_group_000", 00:20:09.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:09.600 "listen_address": { 00:20:09.600 "trtype": "RDMA", 00:20:09.600 "adrfam": "IPv4", 00:20:09.600 "traddr": "192.168.100.8", 00:20:09.600 "trsvcid": "4420" 00:20:09.600 }, 00:20:09.600 "peer_address": { 00:20:09.600 "trtype": "RDMA", 00:20:09.600 "adrfam": "IPv4", 00:20:09.600 "traddr": "192.168.100.8", 00:20:09.600 "trsvcid": "49160" 00:20:09.600 }, 00:20:09.600 "auth": { 00:20:09.600 "state": "completed", 00:20:09.600 "digest": "sha384", 00:20:09.600 "dhgroup": "ffdhe8192" 00:20:09.600 } 00:20:09.600 } 00:20:09.600 ]' 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.600 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.858 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.858 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.858 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.858 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.858 21:49:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.858 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:09.858 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:10.792 21:49:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.792 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:10.792 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.792 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.792 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.792 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.792 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:10.792 21:49:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.792 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.050 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.050 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.050 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.050 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.308 00:20:11.308 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:11.308 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.308 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.567 { 00:20:11.567 "cntlid": 95, 00:20:11.567 "qid": 0, 00:20:11.567 "state": "enabled", 00:20:11.567 "thread": "nvmf_tgt_poll_group_000", 00:20:11.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:11.567 "listen_address": { 00:20:11.567 "trtype": "RDMA", 00:20:11.567 "adrfam": "IPv4", 00:20:11.567 "traddr": "192.168.100.8", 00:20:11.567 "trsvcid": "4420" 00:20:11.567 }, 00:20:11.567 "peer_address": { 00:20:11.567 "trtype": "RDMA", 00:20:11.567 "adrfam": "IPv4", 00:20:11.567 "traddr": "192.168.100.8", 00:20:11.567 "trsvcid": "43663" 00:20:11.567 }, 00:20:11.567 "auth": { 00:20:11.567 "state": "completed", 00:20:11.567 "digest": "sha384", 00:20:11.567 "dhgroup": "ffdhe8192" 00:20:11.567 } 00:20:11.567 } 00:20:11.567 ]' 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.567 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.826 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.826 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.826 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.826 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.826 21:49:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.826 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:11.826 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.760 21:49:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.017 00:20:13.017 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.017 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.017 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.275 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.275 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.275 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.275 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.275 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.275 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.275 { 00:20:13.275 "cntlid": 97, 00:20:13.275 "qid": 0, 00:20:13.275 "state": "enabled", 00:20:13.275 "thread": "nvmf_tgt_poll_group_000", 00:20:13.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:13.275 "listen_address": { 00:20:13.275 "trtype": "RDMA", 00:20:13.275 "adrfam": "IPv4", 00:20:13.275 "traddr": "192.168.100.8", 00:20:13.275 "trsvcid": "4420" 00:20:13.275 }, 00:20:13.275 "peer_address": { 00:20:13.275 "trtype": "RDMA", 00:20:13.275 "adrfam": "IPv4", 00:20:13.275 "traddr": "192.168.100.8", 00:20:13.275 "trsvcid": "43722" 00:20:13.275 }, 00:20:13.275 "auth": { 00:20:13.275 "state": "completed", 00:20:13.275 "digest": "sha512", 00:20:13.275 "dhgroup": "null" 00:20:13.275 } 00:20:13.275 } 00:20:13.275 ]' 00:20:13.275 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.275 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.276 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.534 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.534 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.534 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.534 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.534 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.792 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:13.792 21:49:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:14.359 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.359 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:14.359 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.359 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.359 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.359 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.359 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.359 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.617 21:49:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.617 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.875 00:20:14.875 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.875 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.875 21:49:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.134 { 00:20:15.134 "cntlid": 99, 00:20:15.134 "qid": 0, 00:20:15.134 "state": "enabled", 00:20:15.134 "thread": "nvmf_tgt_poll_group_000", 00:20:15.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:15.134 "listen_address": { 00:20:15.134 "trtype": "RDMA", 00:20:15.134 "adrfam": "IPv4", 00:20:15.134 "traddr": "192.168.100.8", 00:20:15.134 "trsvcid": "4420" 00:20:15.134 }, 00:20:15.134 "peer_address": { 00:20:15.134 "trtype": "RDMA", 00:20:15.134 "adrfam": "IPv4", 00:20:15.134 "traddr": "192.168.100.8", 00:20:15.134 "trsvcid": "59542" 00:20:15.134 }, 00:20:15.134 "auth": { 00:20:15.134 "state": "completed", 00:20:15.134 "digest": "sha512", 00:20:15.134 "dhgroup": "null" 00:20:15.134 } 00:20:15.134 } 00:20:15.134 ]' 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.134 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.134 21:49:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.393 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:15.393 21:49:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:15.959 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.218 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.476 00:20:16.476 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.476 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.476 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.735 { 00:20:16.735 "cntlid": 101, 00:20:16.735 "qid": 0, 00:20:16.735 "state": "enabled", 00:20:16.735 "thread": "nvmf_tgt_poll_group_000", 00:20:16.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:16.735 "listen_address": { 00:20:16.735 "trtype": "RDMA", 00:20:16.735 "adrfam": "IPv4", 00:20:16.735 "traddr": "192.168.100.8", 00:20:16.735 "trsvcid": "4420" 00:20:16.735 }, 00:20:16.735 "peer_address": { 00:20:16.735 "trtype": "RDMA", 00:20:16.735 "adrfam": "IPv4", 00:20:16.735 "traddr": "192.168.100.8", 00:20:16.735 "trsvcid": "56735" 00:20:16.735 }, 00:20:16.735 "auth": { 00:20:16.735 "state": "completed", 00:20:16.735 "digest": "sha512", 00:20:16.735 "dhgroup": "null" 00:20:16.735 } 00:20:16.735 } 00:20:16.735 ]' 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.735 21:49:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.994 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.994 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.994 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.994 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:16.994 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:17.929 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.929 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:17.929 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.929 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.929 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.929 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.929 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:17.929 21:49:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.929 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.188 00:20:18.188 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.188 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.188 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.446 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.446 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.446 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.446 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.446 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.446 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.446 { 00:20:18.446 "cntlid": 103, 00:20:18.446 "qid": 0, 00:20:18.446 "state": "enabled", 00:20:18.446 "thread": "nvmf_tgt_poll_group_000", 00:20:18.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:18.446 "listen_address": { 00:20:18.446 "trtype": "RDMA", 00:20:18.446 "adrfam": "IPv4", 00:20:18.446 "traddr": "192.168.100.8", 00:20:18.446 "trsvcid": "4420" 00:20:18.446 }, 00:20:18.446 "peer_address": { 00:20:18.446 "trtype": "RDMA", 00:20:18.446 "adrfam": "IPv4", 00:20:18.446 "traddr": "192.168.100.8", 00:20:18.446 "trsvcid": "53080" 00:20:18.446 }, 00:20:18.446 "auth": { 00:20:18.446 "state": "completed", 00:20:18.446 "digest": "sha512", 00:20:18.446 "dhgroup": "null" 00:20:18.447 } 00:20:18.447 } 00:20:18.447 ]' 00:20:18.447 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.447 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.447 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.705 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.705 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.705 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.705 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.705 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.705 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:18.705 21:49:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.640 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.898 21:49:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.898 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.157 { 00:20:20.157 "cntlid": 105, 00:20:20.157 "qid": 0, 00:20:20.157 "state": "enabled", 00:20:20.157 "thread": "nvmf_tgt_poll_group_000", 00:20:20.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:20.157 "listen_address": { 00:20:20.157 "trtype": "RDMA", 00:20:20.157 "adrfam": "IPv4", 00:20:20.157 "traddr": "192.168.100.8", 00:20:20.157 "trsvcid": "4420" 00:20:20.157 }, 00:20:20.157 "peer_address": { 00:20:20.157 "trtype": "RDMA", 00:20:20.157 "adrfam": "IPv4", 00:20:20.157 "traddr": "192.168.100.8", 00:20:20.157 "trsvcid": "34006" 00:20:20.157 }, 00:20:20.157 "auth": { 00:20:20.157 "state": "completed", 00:20:20.157 "digest": "sha512", 00:20:20.157 "dhgroup": "ffdhe2048" 00:20:20.157 } 00:20:20.157 } 00:20:20.157 ]' 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.157 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.157 
21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.415 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.415 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.415 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.415 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.415 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.686 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:20.686 21:49:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:21.254 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.254 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:21.254 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.254 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.254 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.254 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.254 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:21.254 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.513 21:49:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.513 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.772 00:20:21.772 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.772 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.772 21:49:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.030 { 00:20:22.030 "cntlid": 107, 00:20:22.030 "qid": 0, 00:20:22.030 "state": "enabled", 00:20:22.030 "thread": "nvmf_tgt_poll_group_000", 00:20:22.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:22.030 "listen_address": { 00:20:22.030 "trtype": "RDMA", 00:20:22.030 "adrfam": "IPv4", 00:20:22.030 "traddr": "192.168.100.8", 00:20:22.030 "trsvcid": "4420" 00:20:22.030 }, 00:20:22.030 "peer_address": { 00:20:22.030 "trtype": "RDMA", 00:20:22.030 "adrfam": "IPv4", 00:20:22.030 "traddr": "192.168.100.8", 00:20:22.030 "trsvcid": "32918" 00:20:22.030 }, 00:20:22.030 "auth": { 00:20:22.030 "state": 
"completed", 00:20:22.030 "digest": "sha512", 00:20:22.030 "dhgroup": "ffdhe2048" 00:20:22.030 } 00:20:22.030 } 00:20:22.030 ]' 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.030 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.288 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:22.288 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:22.855 21:49:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.113 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.371 00:20:23.372 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.372 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.372 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.630 { 00:20:23.630 "cntlid": 109, 00:20:23.630 "qid": 0, 00:20:23.630 "state": "enabled", 00:20:23.630 "thread": "nvmf_tgt_poll_group_000", 00:20:23.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:23.630 "listen_address": { 00:20:23.630 "trtype": "RDMA", 00:20:23.630 "adrfam": "IPv4", 00:20:23.630 "traddr": "192.168.100.8", 00:20:23.630 "trsvcid": 
"4420" 00:20:23.630 }, 00:20:23.630 "peer_address": { 00:20:23.630 "trtype": "RDMA", 00:20:23.630 "adrfam": "IPv4", 00:20:23.630 "traddr": "192.168.100.8", 00:20:23.630 "trsvcid": "60035" 00:20:23.630 }, 00:20:23.630 "auth": { 00:20:23.630 "state": "completed", 00:20:23.630 "digest": "sha512", 00:20:23.630 "dhgroup": "ffdhe2048" 00:20:23.630 } 00:20:23.630 } 00:20:23.630 ]' 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.630 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.889 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.889 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.889 21:49:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.889 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:23.889 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:24.457 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 
00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.794 21:49:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.794 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.794 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.794 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.794 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.067 00:20:25.067 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.067 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.067 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.326 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.326 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.326 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.326 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.326 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.326 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.326 { 00:20:25.326 "cntlid": 111, 00:20:25.326 "qid": 0, 00:20:25.327 "state": "enabled", 00:20:25.327 "thread": "nvmf_tgt_poll_group_000", 00:20:25.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:25.327 
"listen_address": { 00:20:25.327 "trtype": "RDMA", 00:20:25.327 "adrfam": "IPv4", 00:20:25.327 "traddr": "192.168.100.8", 00:20:25.327 "trsvcid": "4420" 00:20:25.327 }, 00:20:25.327 "peer_address": { 00:20:25.327 "trtype": "RDMA", 00:20:25.327 "adrfam": "IPv4", 00:20:25.327 "traddr": "192.168.100.8", 00:20:25.327 "trsvcid": "32984" 00:20:25.327 }, 00:20:25.327 "auth": { 00:20:25.327 "state": "completed", 00:20:25.327 "digest": "sha512", 00:20:25.327 "dhgroup": "ffdhe2048" 00:20:25.327 } 00:20:25.327 } 00:20:25.327 ]' 00:20:25.327 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.327 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.327 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.327 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.327 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.586 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.586 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.586 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.586 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:25.586 21:49:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:26.153 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.412 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:26.412 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.412 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.412 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.412 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.412 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.412 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:26.412 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.672 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.931 00:20:26.931 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.931 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.931 21:49:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.931 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.931 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.931 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.931 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.931 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.931 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:20:26.931 { 00:20:26.931 "cntlid": 113, 00:20:26.931 "qid": 0, 00:20:26.931 "state": "enabled", 00:20:26.931 "thread": "nvmf_tgt_poll_group_000", 00:20:26.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:26.931 "listen_address": { 00:20:26.931 "trtype": "RDMA", 00:20:26.931 "adrfam": "IPv4", 00:20:26.931 "traddr": "192.168.100.8", 00:20:26.931 "trsvcid": "4420" 00:20:26.931 }, 00:20:26.931 "peer_address": { 00:20:26.931 "trtype": "RDMA", 00:20:26.931 "adrfam": "IPv4", 00:20:26.931 "traddr": "192.168.100.8", 00:20:26.931 "trsvcid": "45636" 00:20:26.931 }, 00:20:26.931 "auth": { 00:20:26.931 "state": "completed", 00:20:26.931 "digest": "sha512", 00:20:26.931 "dhgroup": "ffdhe3072" 00:20:26.931 } 00:20:26.931 } 00:20:26.931 ]' 00:20:27.191 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.191 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.191 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.191 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.191 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.191 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.191 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.191 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.450 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:27.450 21:49:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:28.018 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.018 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:28.018 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.018 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.018 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.018 21:50:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.018 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:28.018 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.277 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.535 00:20:28.535 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.535 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.535 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.795 { 00:20:28.795 "cntlid": 115, 00:20:28.795 "qid": 0, 00:20:28.795 "state": "enabled", 00:20:28.795 "thread": "nvmf_tgt_poll_group_000", 00:20:28.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:28.795 "listen_address": { 00:20:28.795 "trtype": "RDMA", 00:20:28.795 "adrfam": "IPv4", 00:20:28.795 "traddr": "192.168.100.8", 00:20:28.795 "trsvcid": "4420" 00:20:28.795 }, 00:20:28.795 "peer_address": { 00:20:28.795 "trtype": "RDMA", 00:20:28.795 "adrfam": "IPv4", 00:20:28.795 "traddr": "192.168.100.8", 00:20:28.795 "trsvcid": "39654" 00:20:28.795 }, 00:20:28.795 "auth": { 00:20:28.795 "state": "completed", 00:20:28.795 "digest": "sha512", 00:20:28.795 "dhgroup": "ffdhe3072" 00:20:28.795 } 00:20:28.795 } 00:20:28.795 ]' 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.795 21:50:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.795 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.795 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.795 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.054 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:29.054 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:29.622 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.881 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:29.881 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:29.881 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.881 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.881 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.881 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:29.881 21:50:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:29.881 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:29.881 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.881 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.881 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.881 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.881 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.882 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.882 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.882 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.882 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.882 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.882 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.882 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.141 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- 
# [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.399 { 00:20:30.399 "cntlid": 117, 00:20:30.399 "qid": 0, 00:20:30.399 "state": "enabled", 00:20:30.399 "thread": "nvmf_tgt_poll_group_000", 00:20:30.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:30.399 "listen_address": { 00:20:30.399 "trtype": "RDMA", 00:20:30.399 "adrfam": "IPv4", 00:20:30.399 "traddr": "192.168.100.8", 00:20:30.399 "trsvcid": "4420" 00:20:30.399 }, 00:20:30.399 "peer_address": { 00:20:30.399 "trtype": "RDMA", 00:20:30.399 "adrfam": "IPv4", 00:20:30.399 "traddr": "192.168.100.8", 00:20:30.399 "trsvcid": "49085" 00:20:30.399 }, 00:20:30.399 "auth": { 00:20:30.399 "state": "completed", 00:20:30.399 "digest": "sha512", 00:20:30.399 "dhgroup": "ffdhe3072" 00:20:30.399 } 00:20:30.399 } 00:20:30.399 ]' 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.399 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.658 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.658 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.658 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.658 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.658 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.917 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:30.917 21:50:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:31.485 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.485 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:31.485 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.485 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.485 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.485 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.485 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.485 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.744 21:50:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.003 00:20:32.003 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.003 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.003 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.262 { 00:20:32.262 "cntlid": 119, 00:20:32.262 "qid": 0, 00:20:32.262 "state": "enabled", 00:20:32.262 "thread": "nvmf_tgt_poll_group_000", 00:20:32.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:32.262 "listen_address": { 00:20:32.262 "trtype": "RDMA", 00:20:32.262 "adrfam": "IPv4", 00:20:32.262 "traddr": "192.168.100.8", 00:20:32.262 "trsvcid": "4420" 00:20:32.262 }, 00:20:32.262 "peer_address": { 00:20:32.262 "trtype": "RDMA", 00:20:32.262 "adrfam": "IPv4", 00:20:32.262 "traddr": "192.168.100.8", 00:20:32.262 "trsvcid": "50938" 00:20:32.262 }, 00:20:32.262 "auth": { 00:20:32.262 "state": "completed", 00:20:32.262 "digest": "sha512", 00:20:32.262 "dhgroup": "ffdhe3072" 00:20:32.262 } 00:20:32.262 } 00:20:32.262 ]' 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.262 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.521 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:32.521 21:50:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:33.087 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
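Each digest/dhgroup pass in this transcript repeats the same host-side RPC sequence: pin the initiator to one digest and DH group with bdev_nvme_set_options, register the host NQN on the subsystem with the DH-CHAP key under test via nvmf_subsystem_add_host, then attach a controller through the host RPC socket so authentication actually runs. A minimal sketch of one such pass, assuming the DHHC-1 keys were already registered under the names key0/ckey0 earlier in the test, with rpc.py abbreviated to a relative path and $SUBNQN, $HOSTNQN, $TRADDR standing in for the values used above:

# Hedged sketch of one loop iteration; not a verbatim excerpt of auth.sh.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
TRADDR=192.168.100.8

# Host side: accept only this digest/dhgroup combination.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side (default RPC socket): allow the host with key0, plus ckey0
# for bidirectional authentication.
scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach over RDMA; DH-CHAP runs during controller setup.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t rdma -f ipv4 -a "$TRADDR" -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

Passes that use key3 drop the --dhchap-ctrlr-key arguments, matching the conditional ckey expansion visible at auth.sh@68 in the trace.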
00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.346 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.912 00:20:33.912 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.912 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.912 21:50:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.912 { 00:20:33.912 "cntlid": 121, 00:20:33.912 "qid": 0, 00:20:33.912 "state": "enabled", 00:20:33.912 "thread": "nvmf_tgt_poll_group_000", 00:20:33.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:33.912 "listen_address": { 00:20:33.912 "trtype": "RDMA", 00:20:33.912 "adrfam": "IPv4", 00:20:33.912 "traddr": "192.168.100.8", 00:20:33.912 "trsvcid": "4420" 00:20:33.912 }, 00:20:33.912 "peer_address": { 00:20:33.912 "trtype": "RDMA", 00:20:33.912 "adrfam": "IPv4", 00:20:33.912 "traddr": "192.168.100.8", 00:20:33.912 "trsvcid": "57300" 00:20:33.912 }, 00:20:33.912 "auth": { 00:20:33.912 "state": "completed", 00:20:33.912 "digest": "sha512", 00:20:33.912 "dhgroup": "ffdhe4096" 00:20:33.912 } 00:20:33.912 } 00:20:33.912 ]' 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.912 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.170 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.170 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.170 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.170 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.170 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.430 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:34.430 21:50:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 
8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:34.998 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.998 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:34.998 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.998 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.998 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.998 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.998 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:34.998 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.258 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.517 00:20:35.517 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.517 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.517 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.775 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.775 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.775 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.775 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.775 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.775 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.775 { 00:20:35.775 "cntlid": 123, 00:20:35.775 "qid": 0, 00:20:35.775 "state": "enabled", 00:20:35.775 "thread": "nvmf_tgt_poll_group_000", 00:20:35.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:35.775 "listen_address": { 00:20:35.775 "trtype": "RDMA", 00:20:35.775 "adrfam": "IPv4", 00:20:35.775 "traddr": "192.168.100.8", 00:20:35.775 "trsvcid": "4420" 00:20:35.775 }, 00:20:35.775 "peer_address": { 00:20:35.775 "trtype": "RDMA", 00:20:35.775 "adrfam": "IPv4", 00:20:35.776 "traddr": "192.168.100.8", 00:20:35.776 "trsvcid": "46484" 00:20:35.776 }, 00:20:35.776 "auth": { 00:20:35.776 "state": "completed", 00:20:35.776 "digest": "sha512", 00:20:35.776 "dhgroup": "ffdhe4096" 00:20:35.776 } 00:20:35.776 } 00:20:35.776 ]' 00:20:35.776 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.776 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.776 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.776 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.776 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.776 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.776 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.776 21:50:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.034 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret 
DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:36.034 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:36.602 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.860 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:36.860 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.860 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.860 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.860 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.860 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:36.860 21:50:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.860 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.119 00:20:37.119 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.119 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.377 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.377 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.378 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.378 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.378 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.378 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.378 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.378 { 00:20:37.378 "cntlid": 125, 00:20:37.378 "qid": 0, 00:20:37.378 "state": "enabled", 00:20:37.378 "thread": "nvmf_tgt_poll_group_000", 00:20:37.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:37.378 "listen_address": { 00:20:37.378 "trtype": "RDMA", 00:20:37.378 "adrfam": "IPv4", 00:20:37.378 "traddr": "192.168.100.8", 00:20:37.378 "trsvcid": "4420" 00:20:37.378 }, 00:20:37.378 "peer_address": { 00:20:37.378 "trtype": "RDMA", 00:20:37.378 "adrfam": "IPv4", 00:20:37.378 "traddr": "192.168.100.8", 00:20:37.378 "trsvcid": "56034" 00:20:37.378 }, 00:20:37.378 "auth": { 00:20:37.378 "state": "completed", 00:20:37.378 "digest": "sha512", 00:20:37.378 "dhgroup": "ffdhe4096" 00:20:37.378 } 00:20:37.378 } 00:20:37.378 ]' 00:20:37.378 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.378 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.378 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.637 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.637 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.637 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.637 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.637 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.896 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:37.896 21:50:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:38.464 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.464 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:38.464 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.464 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.464 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.464 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.464 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.464 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:20:38.722 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.723 21:50:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.980 00:20:38.981 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.981 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.981 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.239 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.239 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.239 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.240 { 00:20:39.240 "cntlid": 127, 00:20:39.240 "qid": 0, 00:20:39.240 "state": "enabled", 00:20:39.240 "thread": "nvmf_tgt_poll_group_000", 00:20:39.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:39.240 "listen_address": { 00:20:39.240 "trtype": "RDMA", 00:20:39.240 "adrfam": "IPv4", 00:20:39.240 "traddr": "192.168.100.8", 00:20:39.240 "trsvcid": "4420" 00:20:39.240 }, 00:20:39.240 "peer_address": { 00:20:39.240 "trtype": "RDMA", 00:20:39.240 "adrfam": "IPv4", 00:20:39.240 "traddr": "192.168.100.8", 00:20:39.240 "trsvcid": "40311" 00:20:39.240 }, 00:20:39.240 "auth": { 00:20:39.240 "state": "completed", 00:20:39.240 "digest": "sha512", 00:20:39.240 "dhgroup": "ffdhe4096" 00:20:39.240 } 00:20:39.240 } 00:20:39.240 ]' 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
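After each authenticated attach, the script verifies the target's view of the connection before tearing it down: nvmf_subsystem_get_qpairs returns the qpair list, and jq assertions on the .auth object confirm the negotiated digest, DH group, and a completed authentication state, as in the JSON dumps above. A condensed sketch of that check for one sha512/ffdhe4096 pass, using the same jq filters as auth.sh@75-77 (the expected values are what this iteration negotiated, not universal constants):

# Hedged sketch of the per-pass verification; assumes the target RPC
# listens on the default socket, as in this run.
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before the next key/dhgroup combination.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each pass also exercises the Linux kernel initiator with nvme connect ... --dhchap-secret (and --dhchap-ctrl-secret when a controller key is configured), followed by nvme disconnect, so both the SPDK host stack and the kernel host path authenticate against the same subsystem.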
00:20:39.240 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.498 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:39.498 21:50:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:40.066 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.325 21:50:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.325 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.893 00:20:40.893 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.893 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.893 21:50:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.893 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.893 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.893 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.893 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.893 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.893 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.893 { 00:20:40.893 "cntlid": 129, 00:20:40.893 "qid": 0, 00:20:40.893 "state": "enabled", 00:20:40.893 "thread": "nvmf_tgt_poll_group_000", 00:20:40.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:40.893 "listen_address": { 00:20:40.893 "trtype": "RDMA", 00:20:40.893 "adrfam": "IPv4", 00:20:40.893 "traddr": "192.168.100.8", 00:20:40.893 "trsvcid": "4420" 00:20:40.893 }, 00:20:40.893 "peer_address": { 00:20:40.893 "trtype": "RDMA", 00:20:40.893 "adrfam": "IPv4", 00:20:40.893 "traddr": "192.168.100.8", 00:20:40.893 "trsvcid": "34849" 00:20:40.893 }, 00:20:40.893 "auth": { 00:20:40.893 "state": "completed", 00:20:40.893 "digest": "sha512", 00:20:40.893 "dhgroup": "ffdhe6144" 00:20:40.893 } 00:20:40.893 } 00:20:40.893 ]' 00:20:40.893 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.151 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.151 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.151 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.151 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.151 
21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.151 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.151 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.534 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:41.534 21:50:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:41.813 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.071 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.071 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.071 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.071 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.071 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.071 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:42.071 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.329 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.587 00:20:42.587 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.587 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.587 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.845 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.845 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.845 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.845 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.845 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.845 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.845 { 00:20:42.845 "cntlid": 131, 00:20:42.845 "qid": 0, 00:20:42.845 "state": "enabled", 00:20:42.845 "thread": "nvmf_tgt_poll_group_000", 00:20:42.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:42.845 "listen_address": { 00:20:42.845 "trtype": "RDMA", 00:20:42.845 "adrfam": "IPv4", 00:20:42.845 "traddr": "192.168.100.8", 00:20:42.845 "trsvcid": "4420" 00:20:42.845 }, 00:20:42.845 "peer_address": { 00:20:42.845 "trtype": "RDMA", 00:20:42.845 "adrfam": "IPv4", 00:20:42.845 "traddr": "192.168.100.8", 00:20:42.845 "trsvcid": "38433" 00:20:42.845 }, 00:20:42.845 "auth": { 00:20:42.845 "state": "completed", 00:20:42.845 "digest": "sha512", 00:20:42.845 "dhgroup": "ffdhe6144" 00:20:42.845 } 00:20:42.845 } 00:20:42.845 ]' 00:20:42.845 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.845 21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.845 
21:50:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.845 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.845 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.845 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.845 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.845 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.103 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:43.103 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:43.666 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.924 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:43.924 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.924 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.924 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.924 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.924 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:43.924 21:50:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.924 21:50:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.924 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.489 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.490 { 00:20:44.490 "cntlid": 133, 00:20:44.490 "qid": 0, 00:20:44.490 "state": "enabled", 00:20:44.490 "thread": "nvmf_tgt_poll_group_000", 00:20:44.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:44.490 "listen_address": { 00:20:44.490 "trtype": "RDMA", 00:20:44.490 "adrfam": "IPv4", 00:20:44.490 "traddr": "192.168.100.8", 00:20:44.490 "trsvcid": "4420" 00:20:44.490 }, 00:20:44.490 "peer_address": { 00:20:44.490 "trtype": "RDMA", 00:20:44.490 "adrfam": "IPv4", 00:20:44.490 "traddr": "192.168.100.8", 00:20:44.490 "trsvcid": "36569" 00:20:44.490 }, 00:20:44.490 "auth": { 00:20:44.490 "state": "completed", 00:20:44.490 "digest": "sha512", 00:20:44.490 "dhgroup": "ffdhe6144" 00:20:44.490 } 00:20:44.490 
} 00:20:44.490 ]' 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.490 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.748 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.748 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.748 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.748 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.748 21:50:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.005 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:45.005 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:45.571 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.571 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:45.571 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.571 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.571 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.571 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.571 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:45.571 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha512 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.830 21:50:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.088 00:20:46.088 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.088 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.088 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.347 { 00:20:46.347 "cntlid": 135, 00:20:46.347 "qid": 0, 00:20:46.347 "state": "enabled", 00:20:46.347 "thread": "nvmf_tgt_poll_group_000", 00:20:46.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:46.347 "listen_address": { 00:20:46.347 "trtype": "RDMA", 00:20:46.347 "adrfam": "IPv4", 00:20:46.347 "traddr": "192.168.100.8", 00:20:46.347 "trsvcid": "4420" 00:20:46.347 }, 00:20:46.347 "peer_address": { 00:20:46.347 "trtype": "RDMA", 00:20:46.347 "adrfam": "IPv4", 00:20:46.347 "traddr": "192.168.100.8", 00:20:46.347 "trsvcid": "55243" 00:20:46.347 }, 
00:20:46.347 "auth": { 00:20:46.347 "state": "completed", 00:20:46.347 "digest": "sha512", 00:20:46.347 "dhgroup": "ffdhe6144" 00:20:46.347 } 00:20:46.347 } 00:20:46.347 ]' 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.347 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.605 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.605 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.605 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.605 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:46.606 21:50:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.540 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.541 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.541 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.541 21:50:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.107 00:20:48.107 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.107 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.107 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.365 { 00:20:48.365 "cntlid": 137, 00:20:48.365 "qid": 0, 00:20:48.365 "state": "enabled", 00:20:48.365 "thread": "nvmf_tgt_poll_group_000", 00:20:48.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:48.365 "listen_address": { 00:20:48.365 "trtype": "RDMA", 00:20:48.365 "adrfam": "IPv4", 00:20:48.365 
"traddr": "192.168.100.8", 00:20:48.365 "trsvcid": "4420" 00:20:48.365 }, 00:20:48.365 "peer_address": { 00:20:48.365 "trtype": "RDMA", 00:20:48.365 "adrfam": "IPv4", 00:20:48.365 "traddr": "192.168.100.8", 00:20:48.365 "trsvcid": "33134" 00:20:48.365 }, 00:20:48.365 "auth": { 00:20:48.365 "state": "completed", 00:20:48.365 "digest": "sha512", 00:20:48.365 "dhgroup": "ffdhe8192" 00:20:48.365 } 00:20:48.365 } 00:20:48.365 ]' 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.365 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.624 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:48.624 21:50:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:49.189 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.446 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:49.446 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.446 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.446 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.446 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.446 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.446 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.705 21:50:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.963 00:20:49.963 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.963 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.963 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:20:50.222 { 00:20:50.222 "cntlid": 139, 00:20:50.222 "qid": 0, 00:20:50.222 "state": "enabled", 00:20:50.222 "thread": "nvmf_tgt_poll_group_000", 00:20:50.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:50.222 "listen_address": { 00:20:50.222 "trtype": "RDMA", 00:20:50.222 "adrfam": "IPv4", 00:20:50.222 "traddr": "192.168.100.8", 00:20:50.222 "trsvcid": "4420" 00:20:50.222 }, 00:20:50.222 "peer_address": { 00:20:50.222 "trtype": "RDMA", 00:20:50.222 "adrfam": "IPv4", 00:20:50.222 "traddr": "192.168.100.8", 00:20:50.222 "trsvcid": "60111" 00:20:50.222 }, 00:20:50.222 "auth": { 00:20:50.222 "state": "completed", 00:20:50.222 "digest": "sha512", 00:20:50.222 "dhgroup": "ffdhe8192" 00:20:50.222 } 00:20:50.222 } 00:20:50.222 ]' 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.222 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.482 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.482 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.482 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.482 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.482 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.482 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:50.482 21:50:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: --dhchap-ctrl-secret DHHC-1:02:Y2NjZGJlM2NhOGM0NDFkYzU3OTExNzdiM2NiODllYjE3MjBhYjQ2YTRlNmJjN2NjhS+IPg==: 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.417 21:50:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.417 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.675 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.675 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.675 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.675 21:50:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.933 00:20:51.933 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.933 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.933 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.192 { 00:20:52.192 "cntlid": 141, 00:20:52.192 "qid": 0, 00:20:52.192 "state": "enabled", 00:20:52.192 "thread": "nvmf_tgt_poll_group_000", 00:20:52.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:52.192 "listen_address": { 00:20:52.192 "trtype": "RDMA", 00:20:52.192 "adrfam": "IPv4", 00:20:52.192 "traddr": "192.168.100.8", 00:20:52.192 "trsvcid": "4420" 00:20:52.192 }, 00:20:52.192 "peer_address": { 00:20:52.192 "trtype": "RDMA", 00:20:52.192 "adrfam": "IPv4", 00:20:52.192 "traddr": "192.168.100.8", 00:20:52.192 "trsvcid": "57357" 00:20:52.192 }, 00:20:52.192 "auth": { 00:20:52.192 "state": "completed", 00:20:52.192 "digest": "sha512", 00:20:52.192 "dhgroup": "ffdhe8192" 00:20:52.192 } 00:20:52.192 } 00:20:52.192 ]' 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.192 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.450 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.450 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.450 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.450 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.450 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.709 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:52.709 21:50:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:01:ZTI3ODViZGY0N2UxN2VlYzVhYTIxY2I1OGVlZDczYzGdoUO0: 00:20:53.274 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.274 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:53.274 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.274 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
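[annotation] The round that just completed (cntlid 141) is the same connect/authenticate cycle the script repeats for every digest, dhgroup, and key index. Condensed into plain rpc.py calls, one pass looks roughly like the sketch below; the socket paths, NQNs, flags, and address are taken from the trace, while the shell variables are added here only for readability:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Pin the host app to one digest/dhgroup pair, then authorize key2 on the target.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach from the host app, then confirm DH-HMAC-CHAP finished on the qpair.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'  # expect "completed"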
00:20:53.274 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.274 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.274 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:53.274 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.532 21:50:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.098 00:20:54.098 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.098 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.098 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.098 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.098 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.098 21:50:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.098 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.356 { 00:20:54.356 "cntlid": 143, 00:20:54.356 "qid": 0, 00:20:54.356 "state": "enabled", 00:20:54.356 "thread": "nvmf_tgt_poll_group_000", 00:20:54.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:54.356 "listen_address": { 00:20:54.356 "trtype": "RDMA", 00:20:54.356 "adrfam": "IPv4", 00:20:54.356 "traddr": "192.168.100.8", 00:20:54.356 "trsvcid": "4420" 00:20:54.356 }, 00:20:54.356 "peer_address": { 00:20:54.356 "trtype": "RDMA", 00:20:54.356 "adrfam": "IPv4", 00:20:54.356 "traddr": "192.168.100.8", 00:20:54.356 "trsvcid": "45346" 00:20:54.356 }, 00:20:54.356 "auth": { 00:20:54.356 "state": "completed", 00:20:54.356 "digest": "sha512", 00:20:54.356 "dhgroup": "ffdhe8192" 00:20:54.356 } 00:20:54.356 } 00:20:54.356 ]' 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.356 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.614 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:54.614 21:50:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.180 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.438 21:50:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.004 00:20:56.004 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
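[annotation] The paired IFS=, / printf %s lines at target/auth.sh@129-130 above are how the script flattens its digest and dhgroup lists into the single comma-separated arguments that bdev_nvme_set_options expects. A minimal reproduction of the trick (the array names here are assumptions; the values are the ones printed in the trace):

    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # "${arr[*]}" joins elements with the first character of IFS, so setting
    # IFS=, inside a command substitution yields the comma list without
    # leaking the changed IFS into the rest of the script.
    joined_digests=$(IFS=,; printf %s "${digests[*]}")    # -> sha256,sha384,sha512
    joined_dhgroups=$(IFS=,; printf %s "${dhgroups[*]}")  # -> null,ffdhe2048,...,ffdhe8192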
00:20:56.004 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.004 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.262 { 00:20:56.262 "cntlid": 145, 00:20:56.262 "qid": 0, 00:20:56.262 "state": "enabled", 00:20:56.262 "thread": "nvmf_tgt_poll_group_000", 00:20:56.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:56.262 "listen_address": { 00:20:56.262 "trtype": "RDMA", 00:20:56.262 "adrfam": "IPv4", 00:20:56.262 "traddr": "192.168.100.8", 00:20:56.262 "trsvcid": "4420" 00:20:56.262 }, 00:20:56.262 "peer_address": { 00:20:56.262 "trtype": "RDMA", 00:20:56.262 "adrfam": "IPv4", 00:20:56.262 "traddr": "192.168.100.8", 00:20:56.262 "trsvcid": "48196" 00:20:56.262 }, 00:20:56.262 "auth": { 00:20:56.262 "state": "completed", 00:20:56.262 "digest": "sha512", 00:20:56.262 "dhgroup": "ffdhe8192" 00:20:56.262 } 00:20:56.262 } 00:20:56.262 ]' 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.262 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.521 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:56.521 21:50:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NjZiMDE5ZjU3ZmJhOWUzNzRhY2Y2MmQxMWU1MzRkMWQ4OGJkYTgyNmE0NjAzMjVjzLT7dg==: --dhchap-ctrl-secret DHHC-1:03:ZjE5NjMwM2EyNWE4MWIyMTFhOTVlMDRmYjRjM2Q5Y2FlZDUwMjZlMjkyN2EyMjZlOTE5NmU4YTkxOTYwNzNlM+Oj2EQ=: 00:20:57.087 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:57.345 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:57.603 request: 00:20:57.603 { 00:20:57.603 "name": "nvme0", 00:20:57.603 "trtype": "rdma", 00:20:57.603 "traddr": "192.168.100.8", 00:20:57.603 "adrfam": "ipv4", 00:20:57.603 "trsvcid": "4420", 00:20:57.603 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:57.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:57.603 "prchk_reftag": false, 00:20:57.603 "prchk_guard": false, 00:20:57.603 "hdgst": false, 00:20:57.603 "ddgst": false, 00:20:57.603 "dhchap_key": "key2", 00:20:57.603 "allow_unrecognized_csi": false, 00:20:57.603 "method": "bdev_nvme_attach_controller", 00:20:57.603 "req_id": 1 00:20:57.603 } 00:20:57.603 Got JSON-RPC error response 00:20:57.603 response: 00:20:57.603 { 00:20:57.603 "code": -5, 00:20:57.603 "message": "Input/output error" 00:20:57.603 } 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:57.603 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:57.861 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.861 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:57.861 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.861 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:57.861 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f 
ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:57.861 21:50:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:58.119 request: 00:20:58.119 { 00:20:58.119 "name": "nvme0", 00:20:58.119 "trtype": "rdma", 00:20:58.119 "traddr": "192.168.100.8", 00:20:58.119 "adrfam": "ipv4", 00:20:58.119 "trsvcid": "4420", 00:20:58.119 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:58.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:58.119 "prchk_reftag": false, 00:20:58.119 "prchk_guard": false, 00:20:58.119 "hdgst": false, 00:20:58.119 "ddgst": false, 00:20:58.119 "dhchap_key": "key1", 00:20:58.119 "dhchap_ctrlr_key": "ckey2", 00:20:58.119 "allow_unrecognized_csi": false, 00:20:58.119 "method": "bdev_nvme_attach_controller", 00:20:58.119 "req_id": 1 00:20:58.119 } 00:20:58.119 Got JSON-RPC error response 00:20:58.119 response: 00:20:58.119 { 00:20:58.119 "code": -5, 00:20:58.119 "message": "Input/output error" 00:20:58.119 } 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.119 
21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.119 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:58.377 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.377 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.377 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.377 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.635 request: 00:20:58.635 { 00:20:58.635 "name": "nvme0", 00:20:58.635 "trtype": "rdma", 00:20:58.635 "traddr": "192.168.100.8", 00:20:58.635 "adrfam": "ipv4", 00:20:58.636 "trsvcid": "4420", 00:20:58.636 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:58.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:58.636 "prchk_reftag": false, 00:20:58.636 "prchk_guard": false, 00:20:58.636 "hdgst": false, 00:20:58.636 "ddgst": false, 00:20:58.636 "dhchap_key": "key1", 00:20:58.636 "dhchap_ctrlr_key": "ckey1", 00:20:58.636 "allow_unrecognized_csi": false, 00:20:58.636 "method": "bdev_nvme_attach_controller", 00:20:58.636 "req_id": 1 00:20:58.636 } 00:20:58.636 Got JSON-RPC error response 00:20:58.636 response: 00:20:58.636 { 00:20:58.636 "code": -5, 00:20:58.636 "message": "Input/output error" 00:20:58.636 } 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3048910 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3048910 ']' 00:20:58.636 
21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3048910 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.636 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3048910 00:20:58.894 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:58.894 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:58.894 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3048910' 00:20:58.894 killing process with pid 3048910 00:20:58.894 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3048910 00:20:58.894 21:50:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3048910 00:20:58.894 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:58.894 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:58.894 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:58.894 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.894 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3072816 00:20:58.894 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3072816 00:20:58.894 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3072816 ']' 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
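[annotation] The old target (pid 3048910) is killed and a fresh one is started with --wait-for-rpc, so that keys can be registered over RPC before initialization completes; -L nvmf_auth enables debug logging on the auth path. Reduced to its shell skeleton (killprocess, nvmfappstart, and waitforlisten are test-framework helpers visible in the trace; the explicit backgrounding is an assumption about what nvmfappstart does internally):

    kill "$old_pid"                                                  # killprocess 3048910
    build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!                                                       # 3072816 in this run
    waitforlisten "$nvmfpid"                                         # block until /var/tmp/spdk.sock answers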
00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3072816 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3072816 ']' 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.152 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.410 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.410 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:59.410 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:59.410 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.410 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.410 null0 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oSp 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.qgn ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qgn 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pNs 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.hHm ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hHm 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qAX 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Ixp ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ixp 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.OdX 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.668 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:59.669 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.669 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.669 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.669 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:59.669 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.669 21:50:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.602 nvme0n1 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.602 { 00:21:00.602 "cntlid": 1, 00:21:00.602 "qid": 0, 00:21:00.602 "state": "enabled", 00:21:00.602 "thread": "nvmf_tgt_poll_group_000", 00:21:00.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:00.602 "listen_address": { 00:21:00.602 "trtype": "RDMA", 00:21:00.602 "adrfam": "IPv4", 00:21:00.602 "traddr": "192.168.100.8", 00:21:00.602 "trsvcid": "4420" 00:21:00.602 }, 00:21:00.602 "peer_address": { 00:21:00.602 "trtype": "RDMA", 00:21:00.602 "adrfam": "IPv4", 00:21:00.602 "traddr": "192.168.100.8", 00:21:00.602 "trsvcid": "48188" 00:21:00.602 }, 00:21:00.602 "auth": { 00:21:00.602 "state": "completed", 00:21:00.602 "digest": "sha512", 00:21:00.602 "dhgroup": "ffdhe8192" 00:21:00.602 } 00:21:00.602 } 00:21:00.602 ]' 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.602 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.860 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.860 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.860 21:50:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.860 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:21:00.860 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:01.792 21:50:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.051 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.051 request: 00:21:02.051 { 00:21:02.051 "name": "nvme0", 00:21:02.051 "trtype": "rdma", 00:21:02.051 "traddr": "192.168.100.8", 00:21:02.051 "adrfam": "ipv4", 00:21:02.051 "trsvcid": "4420", 00:21:02.051 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:02.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:02.051 "prchk_reftag": false, 00:21:02.051 "prchk_guard": false, 00:21:02.051 "hdgst": false, 00:21:02.051 "ddgst": false, 00:21:02.051 "dhchap_key": "key3", 00:21:02.051 "allow_unrecognized_csi": false, 00:21:02.051 "method": "bdev_nvme_attach_controller", 00:21:02.051 "req_id": 1 00:21:02.051 } 00:21:02.051 Got JSON-RPC error response 00:21:02.051 response: 00:21:02.051 { 00:21:02.051 "code": -5, 00:21:02.051 "message": "Input/output error" 00:21:02.051 } 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.309 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.567 request: 00:21:02.567 { 00:21:02.567 "name": "nvme0", 00:21:02.567 "trtype": "rdma", 00:21:02.567 "traddr": "192.168.100.8", 00:21:02.567 "adrfam": "ipv4", 00:21:02.567 "trsvcid": "4420", 00:21:02.567 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:02.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:02.567 "prchk_reftag": false, 00:21:02.567 "prchk_guard": false, 00:21:02.567 "hdgst": false, 00:21:02.567 "ddgst": false, 00:21:02.567 "dhchap_key": "key3", 00:21:02.567 "allow_unrecognized_csi": false, 00:21:02.567 "method": "bdev_nvme_attach_controller", 00:21:02.567 "req_id": 1 00:21:02.567 } 00:21:02.567 Got JSON-RPC error response 00:21:02.567 response: 00:21:02.567 { 00:21:02.567 "code": -5, 00:21:02.567 "message": "Input/output error" 00:21:02.567 } 00:21:02.567 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:02.567 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:02.567 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:02.567 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:02.567 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:02.567 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:02.567 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:02.568 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:02.568 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:02.568 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:02.825 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:02.826 21:50:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:03.082 request: 00:21:03.082 { 00:21:03.082 "name": "nvme0", 00:21:03.082 "trtype": "rdma", 00:21:03.082 "traddr": "192.168.100.8", 00:21:03.082 "adrfam": "ipv4", 00:21:03.082 "trsvcid": "4420", 00:21:03.082 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:03.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:03.082 "prchk_reftag": false, 00:21:03.082 "prchk_guard": false, 00:21:03.082 "hdgst": false, 00:21:03.082 "ddgst": false, 00:21:03.082 "dhchap_key": "key0", 00:21:03.082 "dhchap_ctrlr_key": "key1", 00:21:03.082 "allow_unrecognized_csi": false, 00:21:03.082 "method": "bdev_nvme_attach_controller", 00:21:03.082 "req_id": 1 00:21:03.082 } 00:21:03.082 Got JSON-RPC error response 00:21:03.082 response: 00:21:03.082 { 00:21:03.082 "code": -5, 00:21:03.082 "message": "Input/output error" 00:21:03.082 } 00:21:03.082 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:03.082 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:03.082 
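Every negative case in this log runs through the same assertion idiom from autotest_common.sh: execute the command, record its exit status in es, and count the case as passed only when es is nonzero (with extra handling for statuses above 128, i.e. signal deaths). A simplified, self-contained sketch of that idiom, assuming plain bash; expect_failure is a hypothetical name, not the helper this run actually uses:

    # Simplified sketch of the NOT-style wrapper seen throughout this test.
    expect_failure() {
        local es=0
        "$@" || es=$?    # run the wrapped command and capture its exit status
        (( es != 0 ))    # pass only when the wrapped command failed
    }
    expect_failure false && echo "negative case passed"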
21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:03.082 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:03.082 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:03.082 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:03.082 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:03.339 nvme0n1 00:21:03.339 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:03.339 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:03.339 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.597 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.597 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.597 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.856 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:21:03.856 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.856 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.856 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.856 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:03.856 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:03.856 21:50:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:04.422 nvme0n1 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:04.680 21:50:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.680 21:50:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:04.938 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.938 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:21:04.938 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: --dhchap-ctrl-secret DHHC-1:03:YTYzMDdkNWQxMmQ1Yzg2Yjk4OTQ0MjFlZGE0OGI0ZGY1N2Y0MDg4NzRlMDFlMzY5ZDVkNmY0MWM1NGUzYWU2NS9TgoM=: 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.504 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:05.763 21:50:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:06.330 request: 00:21:06.330 { 00:21:06.330 "name": "nvme0", 00:21:06.330 "trtype": "rdma", 00:21:06.330 "traddr": "192.168.100.8", 00:21:06.330 "adrfam": "ipv4", 00:21:06.330 "trsvcid": "4420", 00:21:06.330 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:06.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:06.330 "prchk_reftag": false, 00:21:06.330 "prchk_guard": false, 00:21:06.330 "hdgst": false, 00:21:06.330 "ddgst": false, 00:21:06.330 "dhchap_key": "key1", 00:21:06.330 "allow_unrecognized_csi": false, 00:21:06.330 "method": "bdev_nvme_attach_controller", 00:21:06.330 "req_id": 1 00:21:06.330 } 00:21:06.330 Got JSON-RPC error response 00:21:06.330 response: 00:21:06.330 { 00:21:06.330 "code": -5, 00:21:06.330 "message": "Input/output error" 00:21:06.330 } 00:21:06.330 21:50:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:06.330 21:50:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.330 21:50:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.330 21:50:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.330 21:50:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:06.330 21:50:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:06.330 21:50:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:06.896 nvme0n1 00:21:06.896 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:06.896 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:06.896 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.154 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.154 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.154 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.412 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:07.412 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.412 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.412 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.412 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:07.412 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:07.412 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:07.670 nvme0n1 00:21:07.670 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:07.670 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.670 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:07.928 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.928 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.928 21:50:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: '' 2s 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: ]] 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWY2ZmRlZWY2N2M2OWU5MjAxYzZkYjFjMTg2OGU1YWSNjVq1: 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:08.187 21:50:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.087 21:50:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: 2s 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: ]] 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzQyN2Q3NzJiNzRiNmIwNWMyNDQ5NmQxMThiNTNlZGMzYzkwNWJmOTk0ZWVjZWU1ZgbHPg==: 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:10.087 21:50:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.621 21:50:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:12.621 21:50:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:13.188 nvme0n1 00:21:13.188 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:13.188 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.188 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.188 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.188 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:13.188 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:13.445 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:13.445 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:13.445 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.703 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.703 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:13.703 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.703 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.703 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.703 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:13.703 21:50:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:13.962 21:50:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:13.962 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:13.962 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:14.221 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:14.480 request: 00:21:14.480 { 00:21:14.480 "name": "nvme0", 00:21:14.480 "dhchap_key": "key1", 00:21:14.480 "dhchap_ctrlr_key": "key3", 00:21:14.480 "method": "bdev_nvme_set_keys", 00:21:14.480 "req_id": 1 00:21:14.480 } 00:21:14.480 Got JSON-RPC error response 00:21:14.480 response: 00:21:14.480 { 00:21:14.480 "code": -13, 00:21:14.480 "message": "Permission denied" 00:21:14.480 } 00:21:14.480 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:14.480 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.480 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.480 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.480 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:21:14.480 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:14.480 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.739 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:14.739 21:50:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:15.675 21:50:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:15.675 21:50:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:15.675 21:50:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.935 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:15.935 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:15.935 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.935 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.935 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.935 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:15.935 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:15.935 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:16.872 nvme0n1 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:16.872 
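Here the target has just been given key2/key3 for this host (nvmf_subsystem_set_keys at target/auth.sh@270), and the host asks bdev_nvme_set_keys to rotate the live controller to key2 as its own key and key0 as the controller key; since key0 is no longer among the keys set for this host, the NOT wrapper expects the rotation to fail, and the response below confirms it with -13 (Permission denied). A minimal sketch of the rotation call, assuming the same host socket as this run (the rpc shorthand is illustrative):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Rotate DH-HMAC-CHAP keys on a live controller; refused here because
    # the target no longer accepts key0 for this host.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0 \
        || echo "rotation refused as expected"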
21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:16.872 21:50:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:17.131 request: 00:21:17.131 { 00:21:17.131 "name": "nvme0", 00:21:17.131 "dhchap_key": "key2", 00:21:17.131 "dhchap_ctrlr_key": "key0", 00:21:17.131 "method": "bdev_nvme_set_keys", 00:21:17.131 "req_id": 1 00:21:17.131 } 00:21:17.131 Got JSON-RPC error response 00:21:17.131 response: 00:21:17.131 { 00:21:17.131 "code": -13, 00:21:17.131 "message": "Permission denied" 00:21:17.131 } 00:21:17.131 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:17.131 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.131 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.131 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.131 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:17.131 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:17.131 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.390 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:17.390 21:50:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:18.326 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:18.326 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:18.326 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:18.585 21:50:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3048986 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3048986 ']' 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3048986 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3048986 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3048986' 00:21:18.585 killing process with pid 3048986 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3048986 00:21:18.585 21:50:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3048986 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:18.849 rmmod nvme_rdma 00:21:18.849 rmmod nvme_fabrics 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 3072816 ']' 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 3072816 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3072816 ']' 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3072816 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.849 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072816 00:21:19.109 21:50:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:19.109 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:19.109 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072816' 00:21:19.109 killing process with pid 3072816 00:21:19.109 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3072816 00:21:19.109 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3072816 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oSp /tmp/spdk.key-sha256.pNs /tmp/spdk.key-sha384.qAX /tmp/spdk.key-sha512.OdX /tmp/spdk.key-sha512.qgn /tmp/spdk.key-sha384.hHm /tmp/spdk.key-sha256.Ixp '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:21:19.367 00:21:19.367 real 2m41.208s 00:21:19.367 user 6m10.265s 00:21:19.367 sys 0m23.822s 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.367 ************************************ 00:21:19.367 END TEST nvmf_auth_target 00:21:19.367 ************************************ 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:19.367 ************************************ 00:21:19.367 START TEST nvmf_fuzz 00:21:19.367 ************************************ 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:19.367 * Looking for test storage... 
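The auth-target section that just finished exercises DH-HMAC-CHAP re-keying: the target rotates the subsystem's keys with nvmf_subsystem_set_keys, a host-side bdev_nvme_set_keys call using a non-matching key is rejected with JSON-RPC error -13 (Permission denied), and the script then polls until the failed controller is torn down. A minimal sketch of that flow, assuming the same two RPC sockets used in this run (/var/tmp/spdk.sock for the target, /var/tmp/host.sock for the host):

  TGT_RPC="scripts/rpc.py"                         # target RPC, default socket
  HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"  # host-side bdev RPC socket

  # Target rotates the subsystem to key2 (host key) / key3 (controller key).
  $TGT_RPC nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # A host update that does not match the target's keys must fail with -13.
  if $HOST_RPC bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3; then
      echo "mismatched re-key unexpectedly succeeded" >&2; exit 1
  fi

  # The controller was attached with --ctrlr-loss-timeout-sec 1, so the failed
  # authentication drops it shortly; wait until the controller list is empty,
  # using the same jq length check traced above.
  while (( $($HOST_RPC bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s
  done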
00:21:19.367 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:21:19.367 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:19.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.627 --rc genhtml_branch_coverage=1 00:21:19.627 --rc genhtml_function_coverage=1 00:21:19.627 --rc genhtml_legend=1 00:21:19.627 --rc geninfo_all_blocks=1 00:21:19.627 --rc geninfo_unexecuted_blocks=1 00:21:19.627 00:21:19.627 ' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:19.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.627 --rc genhtml_branch_coverage=1 00:21:19.627 --rc genhtml_function_coverage=1 00:21:19.627 --rc genhtml_legend=1 00:21:19.627 --rc geninfo_all_blocks=1 00:21:19.627 --rc geninfo_unexecuted_blocks=1 00:21:19.627 00:21:19.627 ' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:19.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.627 --rc genhtml_branch_coverage=1 00:21:19.627 --rc genhtml_function_coverage=1 00:21:19.627 --rc genhtml_legend=1 00:21:19.627 --rc geninfo_all_blocks=1 00:21:19.627 --rc geninfo_unexecuted_blocks=1 00:21:19.627 00:21:19.627 ' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:19.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.627 --rc genhtml_branch_coverage=1 00:21:19.627 --rc genhtml_function_coverage=1 00:21:19.627 --rc genhtml_legend=1 00:21:19.627 --rc geninfo_all_blocks=1 00:21:19.627 --rc geninfo_unexecuted_blocks=1 00:21:19.627 00:21:19.627 ' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.627 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.627 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.628 21:50:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:26.198 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:26.198 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:21:26.198 21:50:57 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:26.198 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:26.199 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:26.199 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # rdma_device_init 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@526 -- # allocate_nic_ips 00:21:26.199 21:50:57 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:26.199 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:26.199 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:26.199 altname enp217s0f0np0 00:21:26.199 altname ens818f0np0 00:21:26.199 inet 192.168.100.8/24 scope global mlx_0_0 00:21:26.199 valid_lft forever preferred_lft forever 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for 
nic_name in $(get_rdma_if_list) 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:26.199 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:26.199 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:26.199 altname enp217s0f1np1 00:21:26.199 altname ens818f1np1 00:21:26.199 inet 192.168.100.9/24 scope global mlx_0_1 00:21:26.199 valid_lft forever preferred_lft forever 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:26.199 21:50:57 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:26.199 21:50:57 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:21:26.199 192.168.100.9' 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # head -n 1 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:21:26.199 192.168.100.9' 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:21:26.199 192.168.100.9' 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # tail -n +2 00:21:26.199 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # head -n 1 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3079579 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; 
killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3079579 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3079579 ']' 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:26.200 Malloc0 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 
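The fabrics_fuzz.sh setup traced here boils down to five RPCs against a freshly started nvmf_tgt: create the RDMA transport, back it with a small malloc bdev, and expose that namespace through a single subsystem listener. A condensed sketch of the same sequence, with the rpc.py path abbreviated:

  RPC="scripts/rpc.py"   # talks to the nvmf_tgt started above with -i 0 -e 0xFFFF -m 0x1

  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create -b Malloc0 64 512          # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420

The listener address 192.168.100.8 is the first Mellanox port's IP discovered during allocate_nic_ips earlier in the log; -a on nvmf_create_subsystem allows any host NQN to connect, which is what lets the fuzzer attach without an allow-list entry.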
00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:21:26.200 21:50:58 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:21:58.404 Fuzzing completed. Shutting down the fuzz application 00:21:58.404 00:21:58.404 Dumping successful admin opcodes: 00:21:58.404 8, 9, 10, 24, 00:21:58.404 Dumping successful io opcodes: 00:21:58.404 0, 9, 00:21:58.404 NS: 0x200003af1f00 I/O qp, Total commands completed: 1073211, total successful commands: 6310, random_seed: 3225457408 00:21:58.404 NS: 0x200003af1f00 admin qp, Total commands completed: 145360, total successful commands: 1179, random_seed: 2773206976 00:21:58.404 21:51:28 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:58.404 Fuzzing completed. Shutting down the fuzz application 00:21:58.404 00:21:58.404 Dumping successful admin opcodes: 00:21:58.404 24, 00:21:58.404 Dumping successful io opcodes: 00:21:58.404 00:21:58.404 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1757860068 00:21:58.404 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1757923274 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:58.404 rmmod nvme_rdma 00:21:58.404 rmmod nvme_fabrics 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 3079579 ']' 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 3079579 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3079579 ']' 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3079579 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3079579 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3079579' 00:21:58.404 killing process with pid 3079579 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3079579 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3079579 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:58.404 00:21:58.404 real 0m39.006s 00:21:58.404 user 0m50.537s 00:21:58.404 sys 0m19.573s 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:58.404 ************************************ 00:21:58.404 END TEST nvmf_fuzz 00:21:58.404 ************************************ 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:58.404 ************************************ 00:21:58.404 START TEST nvmf_multiconnection 00:21:58.404 ************************************ 00:21:58.404 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:58.404 * Looking for test storage... 
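Before the multiconnection test begins, note that the fuzzer above ran twice against that subsystem: a 30-second randomized pass seeded with -S 123456, then a replay pass driven by example.json. A sketch of the two invocations, with the transport ID string and flags copied from the traced commands:

  TRID='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
  FUZZ=test/app/fuzz/nvme_fuzz/nvme_fuzz

  # Pass 1: 30 s of randomized commands on core mask 0x2; the fixed seed
  # makes a failing command sequence reproducible.
  $FUZZ -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

  # Pass 2: replay the canned command patterns shipped in example.json.
  $FUZZ -m 0x2 -F "$TRID" -j test/app/fuzz/nvme_fuzz/example.json -a

Per the result dumps above, the randomized pass completed roughly 1.07M I/O commands and 145K admin commands without crashing the target, while the json-guided pass exercised only the admin queue (16 commands, 4 successful).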
00:21:58.663 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:58.663 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:58.663 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:21:58.663 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:58.663 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:58.663 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.663 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.664 --rc genhtml_branch_coverage=1 00:21:58.664 --rc genhtml_function_coverage=1 00:21:58.664 --rc genhtml_legend=1 00:21:58.664 --rc geninfo_all_blocks=1 00:21:58.664 --rc geninfo_unexecuted_blocks=1 00:21:58.664 00:21:58.664 ' 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.664 --rc genhtml_branch_coverage=1 00:21:58.664 --rc genhtml_function_coverage=1 00:21:58.664 --rc genhtml_legend=1 00:21:58.664 --rc geninfo_all_blocks=1 00:21:58.664 --rc geninfo_unexecuted_blocks=1 00:21:58.664 00:21:58.664 ' 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.664 --rc genhtml_branch_coverage=1 00:21:58.664 --rc genhtml_function_coverage=1 00:21:58.664 --rc genhtml_legend=1 00:21:58.664 --rc geninfo_all_blocks=1 00:21:58.664 --rc geninfo_unexecuted_blocks=1 00:21:58.664 00:21:58.664 ' 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:58.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.664 --rc genhtml_branch_coverage=1 00:21:58.664 --rc genhtml_function_coverage=1 00:21:58.664 --rc genhtml_legend=1 00:21:58.664 --rc geninfo_all_blocks=1 00:21:58.664 --rc geninfo_unexecuted_blocks=1 00:21:58.664 00:21:58.664 ' 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.664 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:58.665 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:21:58.665 21:51:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.224 
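The "[: : integer expression expected" message captured at nvmf/common.sh line 33 above is a genuine scripting bug: an unset flag expands to the empty string before test(1) sees it, so '[' '' -eq 1 ']' fails. A minimal sketch of the defensive form, with a placeholder flag name since the trace does not show which variable was empty:

    # SOME_TEST_FLAG is hypothetical; default empty/unset to 0 before the
    # integer comparison so test(1) never sees an empty operand.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi

The error is harmless here because the failing '[' just skips the branch, which is the same outcome the guarded form produces.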
21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.224 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:05.225 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:05.225 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:05.225 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:05.225 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # rdma_device_init 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@526 -- # allocate_nic_ips 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:05.225 21:51:37 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:05.225 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:05.225 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:05.225 altname enp217s0f0np0 00:22:05.225 altname ens818f0np0 00:22:05.225 inet 192.168.100.8/24 scope global mlx_0_0 00:22:05.225 valid_lft forever preferred_lft forever 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:05.225 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 
00:22:05.225 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:05.226 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:05.226 altname enp217s0f1np1 00:22:05.226 altname ens818f1np1 00:22:05.226 inet 192.168.100.9/24 scope global mlx_0_1 00:22:05.226 valid_lft forever preferred_lft forever 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:05.226 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # 
interface=mlx_0_0 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:05.484 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:22:05.485 192.168.100.9' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:22:05.485 192.168.100.9' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # head -n 1 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:22:05.485 192.168.100.9' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # tail -n +2 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # head -n 1 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=3088852 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
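The address discovery traced above condenses to a few lines: get_ip_address strips the prefix length from the one-line ip -o -4 output, and the first and second target IPs are peeled off RDMA_IP_LIST with head and tail. A sketch assembled from the exact commands in the trace (the helper body approximates nvmf/common.sh rather than copying it):

    # Approximation of get_ip_address as traced: the fourth column of the
    # one-line ip(8) output is ADDR/PREFIX; cut drops the prefix.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    # The two RDMA-capable netdevs this host reported, split the same way
    # the trace splits them.
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9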
00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 3088852 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3088852 ']' 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.485 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:05.485 [2024-11-29 21:51:37.622824] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:05.485 [2024-11-29 21:51:37.622880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.485 [2024-11-29 21:51:37.693017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.744 [2024-11-29 21:51:37.734548] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.744 [2024-11-29 21:51:37.734586] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.744 [2024-11-29 21:51:37.734595] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.744 [2024-11-29 21:51:37.734603] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.744 [2024-11-29 21:51:37.734610] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
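waitforlisten above blocks until nvmf_tgt (pid 3088852) is alive and answering on /var/tmp/spdk.sock. The max_retries=100 and the default socket path come straight from the trace; the loop body below is a rough sketch, since the real autotest_common.sh helper probes the socket with an actual RPC rather than a file test:

    # Hedged sketch of waitforlisten: bail if the target dies, succeed once
    # the UNIX-domain RPC socket exists; the sleep interval is an assumption.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process exited
            [[ -S $rpc_addr ]] && return 0           # socket is up
            sleep 0.1
        done
        return 1
    }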
00:22:05.744 [2024-11-29 21:51:37.734650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.744 [2024-11-29 21:51:37.734673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.744 [2024-11-29 21:51:37.734695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.744 [2024-11-29 21:51:37.734696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.744 21:51:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:05.744 [2024-11-29 21:51:37.907221] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14a4f50/0x14a9400) succeed. 00:22:05.744 [2024-11-29 21:51:37.917495] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14a6540/0x14eaaa0) succeed. 
00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 Malloc1 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 [2024-11-29 21:51:38.091346] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 Malloc2 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.003 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 Malloc3 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 
21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 Malloc4 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.004 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 Malloc5 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 Malloc6 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 21:51:38 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 Malloc7 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.262 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 Malloc8 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 Malloc9 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 Malloc10 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.263 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.520 Malloc11 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.520 21:51:38 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.520 21:51:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:07.452 21:51:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:07.452 21:51:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:07.452 21:51:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.452 21:51:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:07.452 21:51:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:09.352 21:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:09.352 21:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:09.352 21:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:22:09.352 21:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:09.352 21:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:09.352 21:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:09.352 21:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.352 21:51:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:22:10.726 21:51:42 
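
The xtrace above walks multiconnection.sh through its per-subsystem setup: for each of the 11 targets it creates a 64 MB malloc bdev with 512-byte blocks, creates an NVMe-oF subsystem that allows any host (-a) with serial SPDK$i (-s), attaches the bdev as a namespace, and adds an RDMA listener on 192.168.100.8:4420. A minimal sketch of that loop, assuming rpc_cmd wraps the target's scripts/rpc.py and NVMF_SUBSYS=11 as the trace implies:

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        # 64 MB RAM-backed bdev with 512 B logical blocks
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
        # -a: allow any host NQN; -s: serial number reported to the host
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        # expose the subsystem over RDMA on the test NIC
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    done
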
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:10.726 21:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:10.726 21:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:10.726 21:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:10.726 21:51:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:12.625 21:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:12.625 21:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:12.625 21:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:22:12.625 21:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:12.625 21:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:12.625 21:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:12.625 21:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.625 21:51:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:22:13.558 21:51:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:13.559 21:51:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:13.559 21:51:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.559 21:51:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:13.559 21:51:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:15.458 21:51:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:15.458 21:51:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:15.458 21:51:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:22:15.458 21:51:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:15.458 21:51:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.458 21:51:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:15.458 21:51:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.458 21:51:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:22:16.391 21:51:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:16.391 21:51:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:16.391 21:51:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:16.391 21:51:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:16.391 21:51:48 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:18.918 21:51:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:18.918 21:51:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:18.918 21:51:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:22:18.918 21:51:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:18.918 21:51:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.918 21:51:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:18.918 21:51:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.918 21:51:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:22:19.484 21:51:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:19.484 21:51:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:19.484 21:51:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:19.484 21:51:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:19.484 21:51:51 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:21.383 21:51:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:21.383 21:51:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:21.383 21:51:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:22:21.383 21:51:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:21.383 21:51:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:21.383 21:51:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:21.383 21:51:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:21.383 21:51:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:22:22.756 21:51:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:22.756 21:51:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:22.756 21:51:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:22.756 21:51:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:22.756 21:51:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:24.652 21:51:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:24.652 21:51:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:24.652 21:51:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:22:24.652 21:51:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:24.652 21:51:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.652 21:51:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:24.652 21:51:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.652 21:51:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:22:25.585 21:51:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:25.585 21:51:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:25.585 21:51:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:25.585 21:51:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:25.585 21:51:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:27.483 21:51:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:27.483 21:51:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:27.483 21:51:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:22:27.483 21:51:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:27.483 21:51:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:22:27.483 21:51:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:27.483 21:51:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.483 21:51:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:22:28.526 21:52:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:28.526 21:52:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:28.526 21:52:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:28.526 21:52:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:28.526 21:52:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:30.429 21:52:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:30.429 21:52:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:30.429 21:52:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:22:30.429 21:52:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:30.429 21:52:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:30.429 21:52:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:30.429 21:52:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:30.429 21:52:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:22:31.805 21:52:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:31.805 21:52:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:31.805 21:52:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:31.805 21:52:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:31.805 21:52:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:33.706 21:52:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:33.706 21:52:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:33.706 21:52:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:22:33.706 21:52:05 
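
Each nvme connect in this loop is followed by waitforserial, which blocks until the kernel exposes a block device whose serial matches the new subsystem. A sketch of the host-side pattern, reconstructed from the multiconnection.sh@28-30 and autotest_common.sh@1198-1208 traces; -i is nvme-cli's short flag for --nr-io-queues, and HOSTNQN/HOSTID stand in for the literal UUID values shown in the trace:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        # poll lsblk until a namespace advertising this serial appears,
        # giving up after ~15 tries of 2 s each (as traced)
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=$HOSTID \
            -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
        waitforserial SPDK$i
    done
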
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:33.706 21:52:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:33.706 21:52:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:33.706 21:52:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.706 21:52:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:22:34.640 21:52:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:34.640 21:52:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:34.640 21:52:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:34.640 21:52:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:34.640 21:52:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:36.541 21:52:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:36.541 21:52:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:36.541 21:52:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:22:36.541 21:52:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:36.541 21:52:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:36.541 21:52:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:36.541 21:52:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.541 21:52:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:22:37.476 21:52:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:37.476 21:52:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:22:37.476 21:52:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:22:37.476 21:52:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:22:37.476 21:52:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:22:40.005 21:52:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:22:40.005 21:52:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:22:40.005 21:52:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:22:40.005 21:52:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:22:40.005 21:52:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:22:40.005 21:52:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:22:40.005 21:52:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:40.005 [global] 00:22:40.005 thread=1 00:22:40.005 invalidate=1 00:22:40.005 rw=read 00:22:40.005 time_based=1 00:22:40.005 runtime=10 00:22:40.005 ioengine=libaio 00:22:40.005 direct=1 00:22:40.005 bs=262144 00:22:40.005 iodepth=64 00:22:40.005 norandommap=1 00:22:40.005 numjobs=1 00:22:40.005 00:22:40.005 [job0] 00:22:40.005 filename=/dev/nvme0n1 00:22:40.005 [job1] 00:22:40.005 filename=/dev/nvme10n1 00:22:40.005 [job2] 00:22:40.005 filename=/dev/nvme1n1 00:22:40.005 [job3] 00:22:40.005 filename=/dev/nvme2n1 00:22:40.005 [job4] 00:22:40.005 filename=/dev/nvme3n1 00:22:40.005 [job5] 00:22:40.005 filename=/dev/nvme4n1 00:22:40.005 [job6] 00:22:40.005 filename=/dev/nvme5n1 00:22:40.005 [job7] 00:22:40.005 filename=/dev/nvme6n1 00:22:40.005 [job8] 00:22:40.005 filename=/dev/nvme7n1 00:22:40.005 [job9] 00:22:40.005 filename=/dev/nvme8n1 00:22:40.005 [job10] 00:22:40.005 filename=/dev/nvme9n1 00:22:40.005 Could not set queue depth (nvme0n1) 00:22:40.005 Could not set queue depth (nvme10n1) 00:22:40.005 Could not set queue depth (nvme1n1) 00:22:40.005 Could not set queue depth (nvme2n1) 00:22:40.005 Could not set queue depth (nvme3n1) 00:22:40.005 Could not set queue depth (nvme4n1) 00:22:40.005 Could not set queue depth (nvme5n1) 00:22:40.005 Could not set queue depth (nvme6n1) 00:22:40.005 Could not set queue depth (nvme7n1) 00:22:40.005 Could not set queue depth (nvme8n1) 00:22:40.005 Could not set queue depth (nvme9n1) 00:22:40.005 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
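
fio-wrapper turns its flags into the ini job file printed above: -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t read becomes rw=read, and -r 10 becomes time_based=1 with runtime=10, fanning out one libaio job per connected /dev/nvmeXn1. That mapping is inferred from the emitted job file rather than from the wrapper's source, but an equivalent direct invocation for a single device would look roughly like:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --rw=read --bs=262144 --iodepth=64 \
        --time_based --runtime=10 --norandommap --numjobs=1

The repeated "Could not set queue depth" lines are fio warnings rather than test failures; the libaio submission depth of 64 still applies per job.
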
256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:40.005 fio-3.35 00:22:40.005 Starting 11 threads 00:22:52.208 00:22:52.208 job0: (groupid=0, jobs=1): err= 0: pid=3094996: Fri Nov 29 21:52:22 2024 00:22:52.208 read: IOPS=1591, BW=398MiB/s (417MB/s)(3990MiB/10030msec) 00:22:52.208 slat (usec): min=11, max=17485, avg=622.86, stdev=1561.09 00:22:52.208 clat (usec): min=9503, max=77773, avg=39552.05, stdev=13029.40 00:22:52.208 lat (usec): min=9751, max=80613, avg=40174.91, stdev=13283.67 00:22:52.208 clat percentiles (usec): 00:22:52.208 | 1.00th=[13304], 5.00th=[15008], 10.00th=[29492], 20.00th=[31065], 00:22:52.208 | 30.00th=[32113], 40.00th=[32637], 50.00th=[34341], 60.00th=[45351], 00:22:52.208 | 70.00th=[46924], 80.00th=[49021], 90.00th=[61080], 95.00th=[63701], 00:22:52.208 | 99.00th=[66847], 99.50th=[69731], 99.90th=[73925], 99.95th=[77071], 00:22:52.208 | 99.99th=[78119] 00:22:52.208 bw ( KiB/s): min=254976, max=834560, per=10.10%, avg=406988.80, stdev=135089.98, samples=20 00:22:52.208 iops : min= 996, max= 3260, avg=1589.80, stdev=527.70, samples=20 00:22:52.208 lat (msec) : 10=0.03%, 20=7.79%, 50=75.23%, 100=16.96% 00:22:52.208 cpu : usr=0.61%, sys=6.50%, ctx=2992, majf=0, minf=4097 00:22:52.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:52.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.208 issued rwts: total=15961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.208 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.208 job1: (groupid=0, jobs=1): err= 0: pid=3095017: Fri Nov 29 21:52:22 2024 00:22:52.208 read: IOPS=857, BW=214MiB/s (225MB/s)(2154MiB/10052msec) 00:22:52.208 slat (usec): min=12, max=24279, avg=1149.23, stdev=3006.58 00:22:52.208 clat (msec): min=12, max=111, avg=73.43, stdev=11.95 00:22:52.208 lat (msec): min=12, max=120, avg=74.58, stdev=12.42 00:22:52.208 clat percentiles (msec): 00:22:52.208 | 1.00th=[ 53], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 63], 00:22:52.208 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 81], 00:22:52.208 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 93], 00:22:52.208 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 108], 99.95th=[ 111], 00:22:52.208 | 99.99th=[ 112] 00:22:52.208 bw ( KiB/s): min=176640, max=267776, per=5.44%, avg=218982.40, stdev=31591.12, samples=20 00:22:52.208 iops : min= 690, max= 1046, avg=855.40, stdev=123.40, samples=20 00:22:52.208 lat (msec) : 20=0.24%, 50=0.50%, 100=98.64%, 250=0.62% 00:22:52.208 cpu : usr=0.41%, sys=4.14%, ctx=1668, majf=0, minf=3659 00:22:52.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:52.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.208 issued rwts: total=8617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.208 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.208 job2: (groupid=0, jobs=1): err= 0: pid=3095036: Fri Nov 29 21:52:22 2024 00:22:52.208 read: IOPS=925, BW=231MiB/s (243MB/s)(2324MiB/10042msec) 00:22:52.208 slat (usec): min=11, max=50106, avg=1028.82, stdev=3583.38 00:22:52.208 clat (msec): min=13, max=134, avg=68.04, stdev=17.39 00:22:52.208 lat (msec): min=13, max=134, avg=69.07, stdev=17.96 00:22:52.208 clat percentiles (msec): 00:22:52.208 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 46], 20.00th=[ 47], 00:22:52.208 | 
30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 80], 00:22:52.208 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 93], 00:22:52.208 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 128], 99.95th=[ 130], 00:22:52.208 | 99.99th=[ 136] 00:22:52.208 bw ( KiB/s): min=173568, max=342528, per=5.87%, avg=236364.80, stdev=54975.77, samples=20 00:22:52.208 iops : min= 678, max= 1338, avg=923.30, stdev=214.75, samples=20 00:22:52.208 lat (msec) : 20=0.42%, 50=27.50%, 100=71.03%, 250=1.05% 00:22:52.208 cpu : usr=0.49%, sys=3.91%, ctx=2048, majf=0, minf=4097 00:22:52.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:52.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.208 issued rwts: total=9296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.208 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.208 job3: (groupid=0, jobs=1): err= 0: pid=3095052: Fri Nov 29 21:52:22 2024 00:22:52.208 read: IOPS=1196, BW=299MiB/s (314MB/s)(3000MiB/10030msec) 00:22:52.208 slat (usec): min=11, max=54827, avg=726.37, stdev=3007.58 00:22:52.208 clat (usec): min=757, max=132997, avg=52725.19, stdev=24300.41 00:22:52.208 lat (usec): min=800, max=141735, avg=53451.56, stdev=24816.12 00:22:52.208 clat percentiles (msec): 00:22:52.208 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 26], 20.00th=[ 32], 00:22:52.208 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 51], 60.00th=[ 65], 00:22:52.208 | 70.00th=[ 68], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 88], 00:22:52.208 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 110], 99.95th=[ 120], 00:22:52.208 | 99.99th=[ 133] 00:22:52.208 bw ( KiB/s): min=189440, max=530944, per=7.59%, avg=305583.95, stdev=117654.52, samples=20 00:22:52.208 iops : min= 740, max= 2074, avg=1193.65, stdev=459.53, samples=20 00:22:52.208 lat (usec) : 1000=0.03% 00:22:52.208 lat (msec) : 2=0.38%, 4=1.02%, 10=1.41%, 20=4.03%, 50=42.90% 00:22:52.208 lat (msec) : 100=49.89%, 250=0.34% 00:22:52.208 cpu : usr=0.44%, sys=4.85%, ctx=3331, majf=0, minf=4097 00:22:52.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:52.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.208 issued rwts: total=11998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.208 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.208 job4: (groupid=0, jobs=1): err= 0: pid=3095057: Fri Nov 29 21:52:22 2024 00:22:52.208 read: IOPS=997, BW=249MiB/s (261MB/s)(2502MiB/10031msec) 00:22:52.208 slat (usec): min=12, max=29237, avg=995.31, stdev=2784.20 00:22:52.208 clat (msec): min=13, max=126, avg=63.10, stdev=22.07 00:22:52.208 lat (msec): min=14, max=126, avg=64.10, stdev=22.54 00:22:52.208 clat percentiles (msec): 00:22:52.208 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:22:52.208 | 30.00th=[ 42], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 78], 00:22:52.208 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 92], 00:22:52.208 | 99.00th=[ 100], 99.50th=[ 102], 99.90th=[ 112], 99.95th=[ 116], 00:22:52.208 | 99.99th=[ 122] 00:22:52.208 bw ( KiB/s): min=174592, max=501760, per=6.32%, avg=254559.45, stdev=104527.30, samples=20 00:22:52.208 iops : min= 682, max= 1960, avg=994.35, stdev=408.33, samples=20 00:22:52.208 lat (msec) : 20=0.16%, 50=32.67%, 100=66.51%, 250=0.66% 00:22:52.208 cpu : usr=0.41%, sys=4.49%, 
ctx=1867, majf=0, minf=4097 00:22:52.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:52.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.209 issued rwts: total=10006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.209 job5: (groupid=0, jobs=1): err= 0: pid=3095080: Fri Nov 29 21:52:22 2024 00:22:52.209 read: IOPS=1509, BW=377MiB/s (396MB/s)(3789MiB/10042msec) 00:22:52.209 slat (usec): min=10, max=28924, avg=592.53, stdev=1833.48 00:22:52.209 clat (usec): min=9176, max=93147, avg=41763.90, stdev=12162.02 00:22:52.209 lat (usec): min=9420, max=93191, avg=42356.44, stdev=12441.01 00:22:52.209 clat percentiles (usec): 00:22:52.209 | 1.00th=[17957], 5.00th=[28705], 10.00th=[30016], 20.00th=[30802], 00:22:52.209 | 30.00th=[31589], 40.00th=[32900], 50.00th=[44303], 60.00th=[46400], 00:22:52.209 | 70.00th=[47449], 80.00th=[49021], 90.00th=[62129], 95.00th=[63701], 00:22:52.209 | 99.00th=[67634], 99.50th=[70779], 99.90th=[85459], 99.95th=[87557], 00:22:52.209 | 99.99th=[92799] 00:22:52.209 bw ( KiB/s): min=254464, max=519168, per=9.59%, avg=386406.40, stdev=77272.66, samples=20 00:22:52.209 iops : min= 994, max= 2028, avg=1509.40, stdev=301.85, samples=20 00:22:52.209 lat (msec) : 10=0.02%, 20=1.44%, 50=80.17%, 100=18.37% 00:22:52.209 cpu : usr=0.39%, sys=5.24%, ctx=3926, majf=0, minf=4097 00:22:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.209 issued rwts: total=15157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.209 job6: (groupid=0, jobs=1): err= 0: pid=3095091: Fri Nov 29 21:52:22 2024 00:22:52.209 read: IOPS=1274, BW=319MiB/s (334MB/s)(3195MiB/10030msec) 00:22:52.209 slat (usec): min=12, max=23247, avg=765.73, stdev=2039.69 00:22:52.209 clat (msec): min=13, max=114, avg=49.41, stdev=20.20 00:22:52.209 lat (msec): min=13, max=114, avg=50.18, stdev=20.56 00:22:52.209 clat percentiles (msec): 00:22:52.209 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 33], 00:22:52.209 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 42], 60.00th=[ 50], 00:22:52.209 | 70.00th=[ 63], 80.00th=[ 67], 90.00th=[ 82], 95.00th=[ 88], 00:22:52.209 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 105], 99.95th=[ 110], 00:22:52.209 | 99.99th=[ 114] 00:22:52.209 bw ( KiB/s): min=180736, max=503808, per=8.08%, avg=325529.60, stdev=127037.01, samples=20 00:22:52.209 iops : min= 706, max= 1968, avg=1271.60, stdev=496.24, samples=20 00:22:52.209 lat (msec) : 20=0.21%, 50=60.69%, 100=38.81%, 250=0.29% 00:22:52.209 cpu : usr=0.34%, sys=5.76%, ctx=2487, majf=0, minf=4097 00:22:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.209 issued rwts: total=12779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.209 job7: (groupid=0, jobs=1): err= 0: pid=3095096: Fri Nov 29 21:52:22 2024 00:22:52.209 read: IOPS=2364, BW=591MiB/s (620MB/s)(5942MiB/10051msec) 
00:22:52.209 slat (usec): min=10, max=69176, avg=409.24, stdev=1367.49 00:22:52.209 clat (usec): min=733, max=157330, avg=26626.41, stdev=16648.37 00:22:52.209 lat (usec): min=775, max=157975, avg=27035.65, stdev=16926.19 00:22:52.209 clat percentiles (msec): 00:22:52.209 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:22:52.209 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 24], 00:22:52.209 | 70.00th=[ 31], 80.00th=[ 44], 90.00th=[ 58], 95.00th=[ 61], 00:22:52.209 | 99.00th=[ 72], 99.50th=[ 90], 99.90th=[ 99], 99.95th=[ 107], 00:22:52.209 | 99.99th=[ 157] 00:22:52.209 bw ( KiB/s): min=259072, max=1072128, per=15.06%, avg=606831.65, stdev=296093.28, samples=20 00:22:52.209 iops : min= 1012, max= 4188, avg=2370.40, stdev=1156.65, samples=20 00:22:52.209 lat (usec) : 750=0.01%, 1000=0.07% 00:22:52.209 lat (msec) : 2=0.18%, 4=0.42%, 10=1.54%, 20=55.01%, 50=31.43% 00:22:52.209 lat (msec) : 100=11.26%, 250=0.08% 00:22:52.209 cpu : usr=0.45%, sys=6.62%, ctx=4982, majf=0, minf=4097 00:22:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.209 issued rwts: total=23766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.209 job8: (groupid=0, jobs=1): err= 0: pid=3095106: Fri Nov 29 21:52:22 2024 00:22:52.209 read: IOPS=2408, BW=602MiB/s (631MB/s)(6051MiB/10050msec) 00:22:52.209 slat (usec): min=11, max=26872, avg=408.74, stdev=1292.18 00:22:52.209 clat (msec): min=11, max=107, avg=26.14, stdev=15.72 00:22:52.209 lat (msec): min=11, max=118, avg=26.55, stdev=15.99 00:22:52.209 clat percentiles (msec): 00:22:52.209 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:22:52.209 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 23], 00:22:52.209 | 70.00th=[ 32], 80.00th=[ 44], 90.00th=[ 53], 95.00th=[ 61], 00:22:52.209 | 99.00th=[ 64], 99.50th=[ 68], 99.90th=[ 86], 99.95th=[ 90], 00:22:52.209 | 99.99th=[ 108] 00:22:52.209 bw ( KiB/s): min=257024, max=1129472, per=15.34%, avg=617958.40, stdev=313636.43, samples=20 00:22:52.209 iops : min= 1004, max= 4412, avg=2413.90, stdev=1225.14, samples=20 00:22:52.209 lat (msec) : 20=57.28%, 50=32.34%, 100=10.35%, 250=0.02% 00:22:52.209 cpu : usr=0.66%, sys=6.88%, ctx=4298, majf=0, minf=4097 00:22:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.209 issued rwts: total=24202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.209 job9: (groupid=0, jobs=1): err= 0: pid=3095107: Fri Nov 29 21:52:22 2024 00:22:52.209 read: IOPS=1441, BW=360MiB/s (378MB/s)(3619MiB/10040msec) 00:22:52.209 slat (usec): min=11, max=22354, avg=677.57, stdev=1690.81 00:22:52.209 clat (usec): min=1078, max=100115, avg=43666.22, stdev=13906.24 00:22:52.209 lat (usec): min=1129, max=103192, avg=44343.78, stdev=14185.39 00:22:52.209 clat percentiles (usec): 00:22:52.209 | 1.00th=[ 7439], 5.00th=[23987], 10.00th=[30278], 20.00th=[31065], 00:22:52.209 | 30.00th=[32637], 40.00th=[44827], 50.00th=[46400], 60.00th=[46924], 00:22:52.209 | 70.00th=[47973], 80.00th=[51119], 90.00th=[62653], 95.00th=[64750], 00:22:52.209 | 
99.00th=[88605], 99.50th=[90702], 99.90th=[94897], 99.95th=[96994], 00:22:52.209 | 99.99th=[98042] 00:22:52.209 bw ( KiB/s): min=238592, max=681472, per=9.16%, avg=368972.80, stdev=110673.84, samples=20 00:22:52.209 iops : min= 932, max= 2662, avg=1441.30, stdev=432.32, samples=20 00:22:52.209 lat (msec) : 2=0.14%, 4=0.24%, 10=0.86%, 20=2.94%, 50=74.10% 00:22:52.209 lat (msec) : 100=21.71%, 250=0.01% 00:22:52.209 cpu : usr=0.39%, sys=6.02%, ctx=3002, majf=0, minf=4097 00:22:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.209 issued rwts: total=14476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.209 job10: (groupid=0, jobs=1): err= 0: pid=3095108: Fri Nov 29 21:52:22 2024 00:22:52.209 read: IOPS=1185, BW=296MiB/s (311MB/s)(2977MiB/10049msec) 00:22:52.209 slat (usec): min=12, max=52327, avg=808.32, stdev=2727.92 00:22:52.209 clat (msec): min=14, max=140, avg=53.14, stdev=22.68 00:22:52.209 lat (msec): min=14, max=140, avg=53.95, stdev=23.14 00:22:52.209 clat percentiles (msec): 00:22:52.209 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:22:52.209 | 30.00th=[ 33], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 60], 00:22:52.209 | 70.00th=[ 63], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 91], 00:22:52.209 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 120], 99.95th=[ 140], 00:22:52.209 | 99.99th=[ 142] 00:22:52.209 bw ( KiB/s): min=186228, max=540672, per=7.53%, avg=303276.20, stdev=113224.31, samples=20 00:22:52.209 iops : min= 727, max= 2112, avg=1184.65, stdev=442.31, samples=20 00:22:52.209 lat (msec) : 20=0.92%, 50=55.28%, 100=43.16%, 250=0.64% 00:22:52.209 cpu : usr=0.39%, sys=4.93%, ctx=2505, majf=0, minf=4097 00:22:52.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:52.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.209 issued rwts: total=11909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.209 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.209 00:22:52.209 Run status group 0 (all jobs): 00:22:52.209 READ: bw=3934MiB/s (4125MB/s), 214MiB/s-602MiB/s (225MB/s-631MB/s), io=38.6GiB (41.5GB), run=10030-10052msec 00:22:52.209 00:22:52.209 Disk stats (read/write): 00:22:52.209 nvme0n1: ios=31329/0, merge=0/0, ticks=1219621/0, in_queue=1219621, util=96.73% 00:22:52.209 nvme10n1: ios=16860/0, merge=0/0, ticks=1222998/0, in_queue=1222998, util=97.04% 00:22:52.209 nvme1n1: ios=18139/0, merge=0/0, ticks=1221376/0, in_queue=1221376, util=97.38% 00:22:52.209 nvme2n1: ios=23428/0, merge=0/0, ticks=1226573/0, in_queue=1226573, util=97.55% 00:22:52.209 nvme3n1: ios=19420/0, merge=0/0, ticks=1222471/0, in_queue=1222471, util=97.64% 00:22:52.209 nvme4n1: ios=29870/0, merge=0/0, ticks=1221351/0, in_queue=1221351, util=98.06% 00:22:52.209 nvme5n1: ios=24998/0, merge=0/0, ticks=1222351/0, in_queue=1222351, util=98.27% 00:22:52.209 nvme6n1: ios=47168/0, merge=0/0, ticks=1215345/0, in_queue=1215345, util=98.40% 00:22:52.209 nvme7n1: ios=48031/0, merge=0/0, ticks=1216700/0, in_queue=1216700, util=98.90% 00:22:52.209 nvme8n1: ios=28521/0, merge=0/0, ticks=1218735/0, in_queue=1218735, util=99.14% 00:22:52.209 nvme9n1: ios=23440/0, merge=0/0, ticks=1218522/0, 
in_queue=1218522, util=99.29% 00:22:52.209 21:52:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:52.209 [global] 00:22:52.209 thread=1 00:22:52.209 invalidate=1 00:22:52.209 rw=randwrite 00:22:52.209 time_based=1 00:22:52.209 runtime=10 00:22:52.209 ioengine=libaio 00:22:52.209 direct=1 00:22:52.209 bs=262144 00:22:52.209 iodepth=64 00:22:52.209 norandommap=1 00:22:52.209 numjobs=1 00:22:52.209 00:22:52.209 [job0] 00:22:52.209 filename=/dev/nvme0n1 00:22:52.210 [job1] 00:22:52.210 filename=/dev/nvme10n1 00:22:52.210 [job2] 00:22:52.210 filename=/dev/nvme1n1 00:22:52.210 [job3] 00:22:52.210 filename=/dev/nvme2n1 00:22:52.210 [job4] 00:22:52.210 filename=/dev/nvme3n1 00:22:52.210 [job5] 00:22:52.210 filename=/dev/nvme4n1 00:22:52.210 [job6] 00:22:52.210 filename=/dev/nvme5n1 00:22:52.210 [job7] 00:22:52.210 filename=/dev/nvme6n1 00:22:52.210 [job8] 00:22:52.210 filename=/dev/nvme7n1 00:22:52.210 [job9] 00:22:52.210 filename=/dev/nvme8n1 00:22:52.210 [job10] 00:22:52.210 filename=/dev/nvme9n1 00:22:52.210 Could not set queue depth (nvme0n1) 00:22:52.210 Could not set queue depth (nvme10n1) 00:22:52.210 Could not set queue depth (nvme1n1) 00:22:52.210 Could not set queue depth (nvme2n1) 00:22:52.210 Could not set queue depth (nvme3n1) 00:22:52.210 Could not set queue depth (nvme4n1) 00:22:52.210 Could not set queue depth (nvme5n1) 00:22:52.210 Could not set queue depth (nvme6n1) 00:22:52.210 Could not set queue depth (nvme7n1) 00:22:52.210 Could not set queue depth (nvme8n1) 00:22:52.210 Could not set queue depth (nvme9n1) 00:22:52.210 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:52.210 fio-3.35 00:22:52.210 Starting 11 threads 00:23:02.176 00:23:02.176 job0: (groupid=0, jobs=1): err= 0: pid=3096830: Fri Nov 29 21:52:33 2024 00:23:02.176 write: IOPS=786, BW=197MiB/s (206MB/s)(1981MiB/10073msec); 0 zone resets 00:23:02.176 slat (usec): min=22, max=38666, avg=1255.89, stdev=3388.60 00:23:02.176 clat (msec): min=11, max=176, avg=80.07, stdev=19.91 00:23:02.176 lat (msec): min=11, max=176, avg=81.33, stdev=20.41 
00:23:02.176 clat percentiles (msec): 00:23:02.176 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:23:02.176 | 30.00th=[ 70], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 89], 00:23:02.176 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 108], 00:23:02.176 | 99.00th=[ 116], 99.50th=[ 126], 99.90th=[ 140], 99.95th=[ 176], 00:23:02.176 | 99.99th=[ 178] 00:23:02.176 bw ( KiB/s): min=154112, max=305664, per=5.75%, avg=201216.00, stdev=51911.32, samples=20 00:23:02.176 iops : min= 602, max= 1194, avg=786.00, stdev=202.78, samples=20 00:23:02.176 lat (msec) : 20=0.13%, 50=0.86%, 100=82.94%, 250=16.08% 00:23:02.176 cpu : usr=2.08%, sys=3.57%, ctx=1830, majf=0, minf=8 00:23:02.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:02.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.176 issued rwts: total=0,7923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.176 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.176 job1: (groupid=0, jobs=1): err= 0: pid=3096842: Fri Nov 29 21:52:33 2024 00:23:02.176 write: IOPS=2620, BW=655MiB/s (687MB/s)(6560MiB/10013msec); 0 zone resets 00:23:02.176 slat (usec): min=16, max=5276, avg=378.90, stdev=745.78 00:23:02.176 clat (usec): min=9549, max=43133, avg=24036.32, stdev=8562.98 00:23:02.176 lat (usec): min=9590, max=43958, avg=24415.22, stdev=8682.14 00:23:02.176 clat percentiles (usec): 00:23:02.176 | 1.00th=[16581], 5.00th=[17171], 10.00th=[17433], 20.00th=[17957], 00:23:02.176 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19268], 00:23:02.176 | 70.00th=[33424], 80.00th=[36439], 90.00th=[37487], 95.00th=[38011], 00:23:02.176 | 99.00th=[39060], 99.50th=[39584], 99.90th=[41157], 99.95th=[41681], 00:23:02.176 | 99.99th=[43254] 00:23:02.176 bw ( KiB/s): min=432640, max=888320, per=18.86%, avg=660345.26, stdev=207945.75, samples=19 00:23:02.176 iops : min= 1690, max= 3470, avg=2579.47, stdev=812.29, samples=19 00:23:02.176 lat (msec) : 10=0.03%, 20=66.99%, 50=32.98% 00:23:02.176 cpu : usr=4.08%, sys=6.57%, ctx=5527, majf=0, minf=209 00:23:02.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:02.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.176 issued rwts: total=0,26239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.176 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.176 job2: (groupid=0, jobs=1): err= 0: pid=3096843: Fri Nov 29 21:52:33 2024 00:23:02.176 write: IOPS=786, BW=197MiB/s (206MB/s)(1980MiB/10067msec); 0 zone resets 00:23:02.176 slat (usec): min=26, max=39421, avg=1257.88, stdev=3362.55 00:23:02.176 clat (msec): min=8, max=170, avg=80.07, stdev=20.26 00:23:02.176 lat (msec): min=8, max=170, avg=81.33, stdev=20.74 00:23:02.176 clat percentiles (msec): 00:23:02.176 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:23:02.176 | 30.00th=[ 69], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 89], 00:23:02.176 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 108], 00:23:02.176 | 99.00th=[ 118], 99.50th=[ 136], 99.90th=[ 165], 99.95th=[ 167], 00:23:02.176 | 99.99th=[ 171] 00:23:02.176 bw ( KiB/s): min=141824, max=303616, per=5.74%, avg=201113.60, stdev=52710.12, samples=20 00:23:02.176 iops : min= 554, max= 1186, avg=785.60, stdev=205.90, samples=20 00:23:02.176 lat (msec) : 
10=0.05%, 20=0.15%, 50=0.71%, 100=82.90%, 250=16.19% 00:23:02.176 cpu : usr=1.91%, sys=3.39%, ctx=1898, majf=0, minf=265 00:23:02.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:02.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.176 issued rwts: total=0,7919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.176 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.176 job3: (groupid=0, jobs=1): err= 0: pid=3096845: Fri Nov 29 21:52:33 2024 00:23:02.176 write: IOPS=1460, BW=365MiB/s (383MB/s)(3663MiB/10028msec); 0 zone resets 00:23:02.176 slat (usec): min=20, max=9744, avg=678.34, stdev=1222.96 00:23:02.176 clat (usec): min=13820, max=66544, avg=43114.55, stdev=8945.14 00:23:02.176 lat (usec): min=13892, max=66619, avg=43792.88, stdev=9052.48 00:23:02.176 clat percentiles (usec): 00:23:02.176 | 1.00th=[33162], 5.00th=[34866], 10.00th=[35390], 20.00th=[36439], 00:23:02.176 | 30.00th=[36963], 40.00th=[37487], 50.00th=[38011], 60.00th=[39060], 00:23:02.176 | 70.00th=[51643], 80.00th=[55313], 90.00th=[56886], 95.00th=[57934], 00:23:02.176 | 99.00th=[60556], 99.50th=[61604], 99.90th=[63701], 99.95th=[64226], 00:23:02.176 | 99.99th=[66323] 00:23:02.176 bw ( KiB/s): min=283136, max=436736, per=10.66%, avg=373427.20, stdev=68468.75, samples=20 00:23:02.176 iops : min= 1106, max= 1706, avg=1458.70, stdev=267.46, samples=20 00:23:02.176 lat (msec) : 20=0.12%, 50=69.25%, 100=30.63% 00:23:02.176 cpu : usr=3.05%, sys=5.62%, ctx=3606, majf=0, minf=199 00:23:02.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:02.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.176 issued rwts: total=0,14650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.176 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.176 job4: (groupid=0, jobs=1): err= 0: pid=3096846: Fri Nov 29 21:52:33 2024 00:23:02.176 write: IOPS=1666, BW=417MiB/s (437MB/s)(4179MiB/10028msec); 0 zone resets 00:23:02.176 slat (usec): min=17, max=13428, avg=594.85, stdev=1224.87 00:23:02.176 clat (usec): min=10446, max=84248, avg=37791.75, stdev=15919.03 00:23:02.176 lat (usec): min=10483, max=85059, avg=38386.60, stdev=16155.99 00:23:02.176 clat percentiles (usec): 00:23:02.176 | 1.00th=[16909], 5.00th=[17957], 10.00th=[18482], 20.00th=[19530], 00:23:02.176 | 30.00th=[35390], 40.00th=[36439], 50.00th=[37487], 60.00th=[38011], 00:23:02.176 | 70.00th=[38536], 80.00th=[39584], 90.00th=[70779], 95.00th=[72877], 00:23:02.176 | 99.00th=[77071], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:23:02.176 | 99.99th=[84411] 00:23:02.176 bw ( KiB/s): min=218036, max=836096, per=12.17%, avg=426287.40, stdev=170139.51, samples=20 00:23:02.176 iops : min= 851, max= 3266, avg=1665.15, stdev=664.65, samples=20 00:23:02.176 lat (msec) : 20=22.11%, 50=62.75%, 100=15.14% 00:23:02.176 cpu : usr=3.31%, sys=5.18%, ctx=3898, majf=0, minf=139 00:23:02.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:02.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.176 issued rwts: total=0,16714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.176 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.176 
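
The per-job summary lines are internally consistent and easy to sanity-check: reported bandwidth is IOPS times the 256 KiB block size, and total bytes moved is bandwidth times the elapsed runtime. Taking job1 of this write pass as a worked example:

    BW   = 2620 IOPS x 256 KiB = 670720 KiB/s ~= 655 MiB/s   (reported: BW=655MiB/s)
    io   = 655 MiB/s x 10.013 s ~= 6560 MiB                  (reported: 6560MiB/10013msec)
    MB/s = 655 MiB/s x 1.048576 ~= 687 MB/s                  (reported: 687MB/s)
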
job5: (groupid=0, jobs=1): err= 0: pid=3096847: Fri Nov 29 21:52:33 2024 00:23:02.176 write: IOPS=782, BW=196MiB/s (205MB/s)(1970MiB/10072msec); 0 zone resets 00:23:02.176 slat (usec): min=22, max=29349, avg=1217.58, stdev=3081.97 00:23:02.176 clat (msec): min=8, max=164, avg=80.55, stdev=20.64 00:23:02.176 lat (msec): min=8, max=164, avg=81.77, stdev=21.12 00:23:02.176 clat percentiles (msec): 00:23:02.176 | 1.00th=[ 21], 5.00th=[ 38], 10.00th=[ 52], 20.00th=[ 70], 00:23:02.176 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 87], 60.00th=[ 89], 00:23:02.176 | 70.00th=[ 90], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 108], 00:23:02.176 | 99.00th=[ 116], 99.50th=[ 127], 99.90th=[ 155], 99.95th=[ 157], 00:23:02.176 | 99.99th=[ 165] 00:23:02.176 bw ( KiB/s): min=147968, max=397824, per=5.71%, avg=200089.60, stdev=55304.88, samples=20 00:23:02.176 iops : min= 578, max= 1554, avg=781.60, stdev=216.03, samples=20 00:23:02.176 lat (msec) : 10=0.10%, 20=0.79%, 50=9.02%, 100=73.77%, 250=16.32% 00:23:02.176 cpu : usr=1.84%, sys=3.31%, ctx=1974, majf=0, minf=13 00:23:02.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:02.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.176 issued rwts: total=0,7880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.176 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.176 job6: (groupid=0, jobs=1): err= 0: pid=3096848: Fri Nov 29 21:52:33 2024 00:23:02.176 write: IOPS=1294, BW=324MiB/s (339MB/s)(3245MiB/10027msec); 0 zone resets 00:23:02.176 slat (usec): min=22, max=60161, avg=744.31, stdev=2328.33 00:23:02.176 clat (msec): min=7, max=168, avg=48.69, stdev=20.67 00:23:02.177 lat (msec): min=7, max=168, avg=49.43, stdev=21.04 00:23:02.177 clat percentiles (msec): 00:23:02.177 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:23:02.177 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:23:02.177 | 70.00th=[ 41], 80.00th=[ 70], 90.00th=[ 75], 95.00th=[ 104], 00:23:02.177 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 127], 99.95th=[ 163], 00:23:02.177 | 99.99th=[ 169] 00:23:02.177 bw ( KiB/s): min=153088, max=436224, per=9.44%, avg=330624.00, stdev=106616.01, samples=20 00:23:02.177 iops : min= 598, max= 1704, avg=1291.50, stdev=416.47, samples=20 00:23:02.177 lat (msec) : 10=0.05%, 20=0.32%, 50=72.35%, 100=21.68%, 250=5.60% 00:23:02.177 cpu : usr=2.74%, sys=4.61%, ctx=3232, majf=0, minf=86 00:23:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.177 issued rwts: total=0,12978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.177 job7: (groupid=0, jobs=1): err= 0: pid=3096849: Fri Nov 29 21:52:33 2024 00:23:02.177 write: IOPS=1433, BW=358MiB/s (376MB/s)(3594MiB/10028msec); 0 zone resets 00:23:02.177 slat (usec): min=22, max=21821, avg=685.78, stdev=1304.51 00:23:02.177 clat (usec): min=4428, max=90397, avg=43944.94, stdev=10637.61 00:23:02.177 lat (usec): min=4496, max=90452, avg=44630.72, stdev=10779.32 00:23:02.177 clat percentiles (usec): 00:23:02.177 | 1.00th=[29492], 5.00th=[34341], 10.00th=[35390], 20.00th=[36439], 00:23:02.177 | 30.00th=[36963], 40.00th=[37487], 50.00th=[38011], 60.00th=[39060], 00:23:02.177 | 
70.00th=[53740], 80.00th=[55837], 90.00th=[57934], 95.00th=[60031], 00:23:02.177 | 99.00th=[72877], 99.50th=[74974], 99.90th=[78119], 99.95th=[85459], 00:23:02.177 | 99.99th=[90702] 00:23:02.177 bw ( KiB/s): min=282624, max=439296, per=10.46%, avg=366412.80, stdev=68392.94, samples=20 00:23:02.177 iops : min= 1104, max= 1716, avg=1431.30, stdev=267.16, samples=20 00:23:02.177 lat (msec) : 10=0.11%, 20=0.45%, 50=65.05%, 100=34.40% 00:23:02.177 cpu : usr=3.13%, sys=4.76%, ctx=3530, majf=0, minf=81 00:23:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.177 issued rwts: total=0,14376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.177 job8: (groupid=0, jobs=1): err= 0: pid=3096850: Fri Nov 29 21:52:33 2024 00:23:02.177 write: IOPS=1378, BW=345MiB/s (361MB/s)(3469MiB/10063msec); 0 zone resets 00:23:02.177 slat (usec): min=21, max=49537, avg=699.05, stdev=1635.88 00:23:02.177 clat (msec): min=2, max=165, avg=45.71, stdev=15.81 00:23:02.177 lat (msec): min=2, max=165, avg=46.41, stdev=16.03 00:23:02.177 clat percentiles (msec): 00:23:02.177 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:23:02.177 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 40], 00:23:02.177 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 58], 95.00th=[ 61], 00:23:02.177 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 155], 99.95th=[ 161], 00:23:02.177 | 99.99th=[ 165] 00:23:02.177 bw ( KiB/s): min=138240, max=448000, per=10.10%, avg=353517.85, stdev=91893.60, samples=20 00:23:02.177 iops : min= 540, max= 1750, avg=1380.90, stdev=358.93, samples=20 00:23:02.177 lat (msec) : 4=0.03%, 10=0.12%, 20=0.65%, 50=63.64%, 100=32.16% 00:23:02.177 lat (msec) : 250=3.41% 00:23:02.177 cpu : usr=3.08%, sys=5.18%, ctx=3506, majf=0, minf=23 00:23:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.177 issued rwts: total=0,13874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.177 job9: (groupid=0, jobs=1): err= 0: pid=3096852: Fri Nov 29 21:52:33 2024 00:23:02.177 write: IOPS=728, BW=182MiB/s (191MB/s)(1835MiB/10074msec); 0 zone resets 00:23:02.177 slat (usec): min=25, max=45167, avg=1343.67, stdev=3722.83 00:23:02.177 clat (msec): min=10, max=182, avg=86.46, stdev=14.20 00:23:02.177 lat (msec): min=10, max=182, avg=87.81, stdev=14.75 00:23:02.177 clat percentiles (msec): 00:23:02.177 | 1.00th=[ 65], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 72], 00:23:02.177 | 30.00th=[ 77], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 89], 00:23:02.177 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 109], 00:23:02.177 | 99.00th=[ 117], 99.50th=[ 128], 99.90th=[ 167], 99.95th=[ 171], 00:23:02.177 | 99.99th=[ 182] 00:23:02.177 bw ( KiB/s): min=151552, max=227328, per=5.32%, avg=186313.40, stdev=26575.55, samples=20 00:23:02.177 iops : min= 592, max= 888, avg=727.75, stdev=103.76, samples=20 00:23:02.177 lat (msec) : 20=0.16%, 50=0.38%, 100=81.92%, 250=17.53% 00:23:02.177 cpu : usr=1.48%, sys=3.23%, ctx=1783, majf=0, minf=144 00:23:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.1% 00:23:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.177 issued rwts: total=0,7340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.177 job10: (groupid=0, jobs=1): err= 0: pid=3096857: Fri Nov 29 21:52:33 2024 00:23:02.177 write: IOPS=784, BW=196MiB/s (206MB/s)(1976MiB/10071msec); 0 zone resets 00:23:02.177 slat (usec): min=26, max=42258, avg=1261.92, stdev=3219.70 00:23:02.177 clat (msec): min=8, max=163, avg=80.27, stdev=20.17 00:23:02.177 lat (msec): min=8, max=163, avg=81.53, stdev=20.63 00:23:02.177 clat percentiles (msec): 00:23:02.177 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:23:02.177 | 30.00th=[ 68], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 89], 00:23:02.177 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 109], 00:23:02.177 | 99.00th=[ 117], 99.50th=[ 124], 99.90th=[ 157], 99.95th=[ 161], 00:23:02.177 | 99.99th=[ 163] 00:23:02.177 bw ( KiB/s): min=143872, max=302592, per=5.73%, avg=200704.00, stdev=52597.81, samples=20 00:23:02.177 iops : min= 562, max= 1182, avg=784.00, stdev=205.46, samples=20 00:23:02.177 lat (msec) : 10=0.06%, 20=0.16%, 50=0.78%, 100=82.48%, 250=16.51% 00:23:02.177 cpu : usr=1.91%, sys=3.33%, ctx=1903, majf=0, minf=12 00:23:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:02.177 issued rwts: total=0,7903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:02.177 00:23:02.177 Run status group 0 (all jobs): 00:23:02.177 WRITE: bw=3420MiB/s (3586MB/s), 182MiB/s-655MiB/s (191MB/s-687MB/s), io=33.6GiB (36.1GB), run=10013-10074msec 00:23:02.177 00:23:02.177 Disk stats (read/write): 00:23:02.177 nvme0n1: ios=49/15703, merge=0/0, ticks=8/1229983, in_queue=1229991, util=95.46% 00:23:02.177 nvme10n1: ios=0/52291, merge=0/0, ticks=0/1242634, in_queue=1242634, util=95.75% 00:23:02.177 nvme1n1: ios=0/15701, merge=0/0, ticks=0/1228495, in_queue=1228495, util=96.23% 00:23:02.177 nvme2n1: ios=0/29146, merge=0/0, ticks=0/1233536, in_queue=1233536, util=96.49% 00:23:02.177 nvme3n1: ios=0/33283, merge=0/0, ticks=0/1235148, in_queue=1235148, util=96.63% 00:23:02.177 nvme4n1: ios=0/15632, merge=0/0, ticks=0/1230905, in_queue=1230905, util=97.21% 00:23:02.177 nvme5n1: ios=0/25803, merge=0/0, ticks=0/1235788, in_queue=1235788, util=97.49% 00:23:02.177 nvme6n1: ios=0/28603, merge=0/0, ticks=0/1234038, in_queue=1234038, util=97.70% 00:23:02.177 nvme7n1: ios=0/27613, merge=0/0, ticks=0/1228574, in_queue=1228574, util=98.41% 00:23:02.177 nvme8n1: ios=0/14553, merge=0/0, ticks=0/1228791, in_queue=1228791, util=98.75% 00:23:02.177 nvme9n1: ios=0/15678, merge=0/0, ticks=0/1227647, in_queue=1227647, util=98.98% 00:23:02.177 21:52:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:23:02.177 21:52:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:23:02.177 21:52:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.177 21:52:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode1 00:23:02.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:02.742 21:52:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:02.742 21:52:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:02.742 21:52:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:02.742 21:52:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.999 21:52:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:03.931 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:03.931 21:52:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:04.864 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:04.864 21:52:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:06.235 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.235 21:52:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.235 21:52:38 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:07.168 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:07.168 21:52:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:08.102 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:23:08.102 21:52:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:09.036 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.036 21:52:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:09.969 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:09.969 21:52:42 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:10.903 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:10.903 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.904 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:10.904 21:52:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:12.276 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:12.276 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.276 21:52:44 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:12.277 21:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:13.208 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:13.208 rmmod nvme_rdma 00:23:13.208 rmmod nvme_fabrics 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 3088852 ']' 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@514 -- # killprocess 3088852 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3088852 ']' 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3088852 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3088852 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3088852' 00:23:13.208 killing process with pid 3088852 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3088852 00:23:13.208 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3088852 00:23:13.773 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:23:13.774 00:23:13.774 real 1m15.209s 00:23:13.774 user 4m52.795s 00:23:13.774 sys 0m19.936s 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:13.774 ************************************ 00:23:13.774 END TEST nvmf_multiconnection 00:23:13.774 ************************************ 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:13.774 ************************************ 00:23:13.774 START TEST nvmf_initiator_timeout 00:23:13.774 ************************************ 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:13.774 * Looking for test storage... 
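The teardown traced above repeats one pattern for cnode1 through cnode11: disconnect the initiator, wait for the SPDKn serial to disappear from lsblk, then delete the subsystem over RPC. A minimal sketch of that loop, assuming the serial-number convention visible in the log and SPDK's scripts/rpc.py wrapper; the real waitforserial_disconnect helper also enforces a retry limit, omitted here:

#!/usr/bin/env bash
# Condensed form of the per-subsystem teardown shown in the trace.
NVMF_SUBSYS=11   # eleven subsystems, matching 'seq 1 11' above
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # Drop the initiator-side controller for this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # Block until no block device with serial SPDK${i} is still visible.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    # Remove the subsystem on the target side via the JSON-RPC wrapper.
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done

After the loop, nvmftestfini unloads nvme-rdma and nvme-fabrics (the rmmod lines above) and kills the target process before the TEST summary is printed.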
00:23:13.774 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:23:13.774 21:52:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.774 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:14.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.033 --rc genhtml_branch_coverage=1 00:23:14.033 --rc genhtml_function_coverage=1 00:23:14.033 --rc genhtml_legend=1 00:23:14.033 --rc geninfo_all_blocks=1 00:23:14.033 --rc geninfo_unexecuted_blocks=1 00:23:14.033 00:23:14.033 ' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:14.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.033 --rc genhtml_branch_coverage=1 00:23:14.033 --rc genhtml_function_coverage=1 00:23:14.033 --rc genhtml_legend=1 00:23:14.033 --rc geninfo_all_blocks=1 00:23:14.033 --rc geninfo_unexecuted_blocks=1 00:23:14.033 00:23:14.033 ' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:14.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.033 --rc genhtml_branch_coverage=1 00:23:14.033 --rc genhtml_function_coverage=1 00:23:14.033 --rc genhtml_legend=1 00:23:14.033 --rc geninfo_all_blocks=1 00:23:14.033 --rc geninfo_unexecuted_blocks=1 00:23:14.033 00:23:14.033 ' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:14.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.033 --rc genhtml_branch_coverage=1 00:23:14.033 --rc genhtml_function_coverage=1 00:23:14.033 --rc genhtml_legend=1 00:23:14.033 --rc geninfo_all_blocks=1 00:23:14.033 --rc geninfo_unexecuted_blocks=1 00:23:14.033 00:23:14.033 ' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.033 21:52:46 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.033 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:14.033 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.034 21:52:46 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.596 21:52:52 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:23:20.596 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:20.597 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:23:20.597 21:52:52 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:20.597 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:20.597 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:20.597 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # rdma_device_init 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@526 -- # allocate_nic_ips 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
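The address discovery that follows boils down to one pipeline per interface: list IPv4 addresses with ip -o -4, take the CIDR field, and strip the prefix length. A sketch of the same idea as a standalone helper, mirroring the get_ip_address calls in the trace (the interface names are the mlx_0_* devices matched above):

#!/usr/bin/env bash
# First IPv4 address bound to an interface, as traced for mlx_0_0/mlx_0_1.
get_ip_address() {
    local interface=$1
    # 'ip -o -4 addr show' prints one line per address; field 4 is the
    # CIDR form (e.g. 192.168.100.8/24), so cut off the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # 192.168.100.8 in the log above
get_ip_address mlx_0_1   # 192.168.100.9

The same helper is reused below to build RDMA_IP_LIST, so both ports end up in one newline-separated string.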
00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:20.597 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:20.597 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:20.597 altname enp217s0f0np0 00:23:20.597 altname ens818f0np0 00:23:20.597 inet 192.168.100.8/24 scope global mlx_0_0 00:23:20.597 valid_lft forever preferred_lft forever 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:20.597 21:52:52 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:20.597 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:20.597 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:20.597 altname enp217s0f1np1 00:23:20.597 altname ens818f1np1 00:23:20.597 inet 192.168.100.9/24 scope global mlx_0_1 00:23:20.597 valid_lft forever preferred_lft forever 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:20.597 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # 
for nic_name in $(get_rdma_if_list) 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:20.598 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:23:20.857 192.168.100.9' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:23:20.857 192.168.100.9' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # head -n 1 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # tail -n +2 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:23:20.857 192.168.100.9' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # head -n 1 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # 
set +x 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=3103586 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 3103586 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3103586 ']' 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.857 21:52:52 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.857 [2024-11-29 21:52:52.961840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:20.857 [2024-11-29 21:52:52.961902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.857 [2024-11-29 21:52:53.034071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.857 [2024-11-29 21:52:53.074933] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.857 [2024-11-29 21:52:53.074977] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.857 [2024-11-29 21:52:53.074986] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.857 [2024-11-29 21:52:53.074994] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.857 [2024-11-29 21:52:53.075001] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
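The nvmfappstart step traced above amounts to launching nvmf_tgt in the background and polling its RPC socket until it answers; only then do the RPC-based setup calls proceed. A sketch under the flags shown in the log (the polling loop is a simplification; the real waitforlisten in autotest_common.sh is more careful):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # The target is ready once its RPC server answers on the UNIX socket.
  for ((i = 0; i < 100; i++)); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done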
00:23:20.857 [2024-11-29 21:52:53.075098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.857 [2024-11-29 21:52:53.075212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.857 [2024-11-29 21:52:53.075278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.858 [2024-11-29 21:52:53.075280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.116 Malloc0 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.116 Delay0 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.116 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.116 [2024-11-29 21:52:53.312435] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x225ff80/0x2152a00) succeed. 00:23:21.116 [2024-11-29 21:52:53.323242] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2261570/0x21940a0) succeed. 
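Spelled out as direct rpc.py invocations, the bdev and transport setup traced above is the following (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the flag values are copied from the log):

  # A 64 MiB malloc ramdisk with 512-byte blocks, named Malloc0.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Delay0 wraps Malloc0 and injects 30 us average and p99 latency
  # on both reads and writes (-r/-t/-w/-n).
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  # RDMA transport with 1024 shared buffers and an 8 KiB I/O unit size.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The delay bdev is the point of the test: artificial latency in the I/O path lets the initiator's timeout handling be exercised deterministically.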
00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.378 [2024-11-29 21:52:53.467945] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.378 21:52:53 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:22.452 21:52:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:22.452 21:52:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:23:22.452 21:52:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:22.452 21:52:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:22.452 21:52:54 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3104202 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:24.351 21:52:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:24.351 [global] 00:23:24.351 thread=1 00:23:24.351 invalidate=1 00:23:24.351 rw=write 00:23:24.351 time_based=1 00:23:24.351 runtime=60 00:23:24.351 ioengine=libaio 00:23:24.351 direct=1 00:23:24.351 bs=4096 00:23:24.351 iodepth=1 00:23:24.351 norandommap=0 00:23:24.351 numjobs=1 00:23:24.351 00:23:24.351 verify_dump=1 00:23:24.351 verify_backlog=512 00:23:24.351 verify_state_save=0 00:23:24.351 do_verify=1 00:23:24.351 verify=crc32c-intel 00:23:24.351 [job0] 00:23:24.351 filename=/dev/nvme0n1 00:23:24.351 Could not set queue depth (nvme0n1) 00:23:24.609 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:24.609 fio-3.35 00:23:24.609 Starting 1 thread 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:27.893 true 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:27.893 true 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:27.893 true 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:27.893 true 00:23:27.893 21:52:59 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.893 21:52:59 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.427 true 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.427 true 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.427 true 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.427 true 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:30.427 21:53:02 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3104202 00:24:26.648 00:24:26.648 job0: (groupid=0, jobs=1): err= 0: pid=3104464: Fri Nov 29 21:53:56 2024 00:24:26.648 read: IOPS=1288, BW=5154KiB/s (5278kB/s)(302MiB/60000msec) 00:24:26.648 slat (usec): min=2, max=19073, avg= 8.97, stdev=89.13 00:24:26.648 clat (usec): min=74, max=42350k, avg=650.50, stdev=152311.58 00:24:26.648 lat (usec): min=85, max=42350k, avg=659.47, stdev=152311.61 00:24:26.648 clat percentiles (usec): 00:24:26.648 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 00:24:26.648 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:24:26.648 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 114], 00:24:26.648 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 133], 00:24:26.648 | 99.99th=[ 249] 00:24:26.648 write: IOPS=1295, BW=5182KiB/s (5306kB/s)(304MiB/60000msec); 0 zone resets 00:24:26.648 slat (usec): 
min=3, max=296, avg=10.98, stdev= 3.18 00:24:26.648 clat (usec): min=71, max=307, avg=100.29, stdev= 6.65 00:24:26.648 lat (usec): min=82, max=405, avg=111.27, stdev= 7.88 00:24:26.648 clat percentiles (usec): 00:24:26.648 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 95], 00:24:26.648 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 102], 00:24:26.648 | 70.00th=[ 103], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 112], 00:24:26.648 | 99.00th=[ 117], 99.50th=[ 119], 99.90th=[ 129], 99.95th=[ 141], 00:24:26.648 | 99.99th=[ 239] 00:24:26.648 bw ( KiB/s): min= 2304, max=20480, per=100.00%, avg=16839.11, stdev=3544.10, samples=36 00:24:26.648 iops : min= 576, max= 5120, avg=4209.78, stdev=886.02, samples=36 00:24:26.648 lat (usec) : 100=42.11%, 250=57.88%, 500=0.01% 00:24:26.648 lat (msec) : >=2000=0.01% 00:24:26.648 cpu : usr=1.86%, sys=3.10%, ctx=155051, majf=0, minf=108 00:24:26.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:26.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 issued rwts: total=77312,77731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:26.648 00:24:26.648 Run status group 0 (all jobs): 00:24:26.648 READ: bw=5154KiB/s (5278kB/s), 5154KiB/s-5154KiB/s (5278kB/s-5278kB/s), io=302MiB (317MB), run=60000-60000msec 00:24:26.648 WRITE: bw=5182KiB/s (5306kB/s), 5182KiB/s-5182KiB/s (5306kB/s-5306kB/s), io=304MiB (318MB), run=60000-60000msec 00:24:26.648 00:24:26.648 Disk stats (read/write): 00:24:26.648 nvme0n1: ios=77165/77312, merge=0/0, ticks=7409/7120, in_queue=14529, util=99.84% 00:24:26.648 21:53:56 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:26.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:26.648 nvmf hotplug test: fio successful as expected 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.648 21:53:57 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:26.648 rmmod nvme_rdma 00:24:26.648 rmmod nvme_fabrics 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 3103586 ']' 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 3103586 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 3103586 ']' 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3103586 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3103586 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3103586' 00:24:26.648 killing process with pid 3103586 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 3103586 00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3103586 00:24:26.648 21:53:58 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:24:26.648
00:24:26.648 real 1m12.541s
00:24:26.648 user 4m31.285s
00:24:26.648 sys 0m7.871s
00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:26.648 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:26.649 ************************************
00:24:26.649 END TEST nvmf_initiator_timeout
00:24:26.649 ************************************
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']'
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]]
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:26.649 ************************************
00:24:26.649 START TEST nvmf_srq_overwhelm
00:24:26.649 ************************************
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:24:26.649 * Looking for test storage...
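With nvmf_initiator_timeout finished above, its host-side flow is worth a condensed recap: connect, wait for the namespace to appear by serial, run a verified fio write job, disconnect. A sketch with values taken from the trace (the retry bounds are assumptions):

  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e
  # Wait until exactly one block device reports the subsystem serial.
  for ((i = 0; i <= 15; i++)); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
      sleep 2
  done
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1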
00:24:26.649 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # lcov --version 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:26.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.649 --rc genhtml_branch_coverage=1 00:24:26.649 --rc genhtml_function_coverage=1 00:24:26.649 --rc genhtml_legend=1 00:24:26.649 --rc geninfo_all_blocks=1 00:24:26.649 --rc geninfo_unexecuted_blocks=1 00:24:26.649 00:24:26.649 ' 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:26.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.649 --rc genhtml_branch_coverage=1 00:24:26.649 --rc genhtml_function_coverage=1 00:24:26.649 --rc genhtml_legend=1 00:24:26.649 --rc geninfo_all_blocks=1 00:24:26.649 --rc geninfo_unexecuted_blocks=1 00:24:26.649 00:24:26.649 ' 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:26.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.649 --rc genhtml_branch_coverage=1 00:24:26.649 --rc genhtml_function_coverage=1 00:24:26.649 --rc genhtml_legend=1 00:24:26.649 --rc geninfo_all_blocks=1 00:24:26.649 --rc geninfo_unexecuted_blocks=1 00:24:26.649 00:24:26.649 ' 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:26.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.649 --rc genhtml_branch_coverage=1 00:24:26.649 --rc genhtml_function_coverage=1 00:24:26.649 --rc genhtml_legend=1 00:24:26.649 --rc geninfo_all_blocks=1 00:24:26.649 --rc geninfo_unexecuted_blocks=1 00:24:26.649 00:24:26.649 ' 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.649 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.650 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.650 21:53:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:33.217 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 
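The discovery traced above first buckets candidate ports by PCI vendor:device ID and only then walks sysfs for their net devices. The gist, as a sketch (pci_bus_cache is an associative array populated earlier in nvmf/common.sh from the PCI bus listing; that part is omitted here):

  mellanox=0x15b3
  # 0x15b3:0x1015 (ConnectX-4 Lx) is the ID matched for both ports in this run.
  mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
  pci_devs+=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
  # On an mlx5 setup such as this one, the list is narrowed to the Mellanox
  # ports, and each port's interface names are then read from sysfs:
  pci_devs=("${mlx[@]}")
  for pci in "${pci_devs[@]}"; do                      # e.g. 0000:d9:00.0
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the ifnames
      net_devs+=("${pci_net_devs[@]}")
  done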
00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:33.217 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:33.217 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:33.217 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # is_hw=yes 00:24:33.217 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:24:33.218 21:54:05 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # rdma_device_init 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@526 -- # allocate_nic_ips 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 
-- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:33.218 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:33.218 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:33.218 altname enp217s0f0np0 00:24:33.218 altname ens818f0np0 00:24:33.218 inet 192.168.100.8/24 scope global mlx_0_0 00:24:33.218 valid_lft forever preferred_lft forever 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:33.218 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:33.218 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:33.218 altname enp217s0f1np1 00:24:33.218 altname ens818f1np1 00:24:33.218 inet 192.168.100.9/24 scope global mlx_0_1 00:24:33.218 valid_lft forever preferred_lft forever 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # return 0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- 
# get_available_rdma_ips 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # 
awk '{print $4}' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:24:33.218 192.168.100.9' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:24:33.218 192.168.100.9' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # head -n 1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:24:33.218 192.168.100.9' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # tail -n +2 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # head -n 1 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:24:33.218 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@505 -- # nvmfpid=3118375 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@506 -- # waitforlisten 3118375 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 3118375 ']' 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
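In short, the environment setup traced above loads the kernel RDMA stack and then derives one target IP per Mellanox port. A minimal standalone sketch of the same steps, assuming the mlx_0_0/mlx_0_1 interfaces and the addresses reported by this run:

    # Load the RDMA core modules (rdma_device_init), then the NVMe/RDMA initiator.
    modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    modprobe nvme-rdma

    # First IPv4 address of an interface, as in the get_ip_address trace above.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this log
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this log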
00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.478 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.478 [2024-11-29 21:54:05.539053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:33.478 [2024-11-29 21:54:05.539121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.478 [2024-11-29 21:54:05.611369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.478 [2024-11-29 21:54:05.652679] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.478 [2024-11-29 21:54:05.652724] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.478 [2024-11-29 21:54:05.652733] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.478 [2024-11-29 21:54:05.652741] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.478 [2024-11-29 21:54:05.652748] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.478 [2024-11-29 21:54:05.652797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.478 [2024-11-29 21:54:05.652895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.478 [2024-11-29 21:54:05.652992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.478 [2024-11-29 21:54:05.652994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.737 [2024-11-29 21:54:05.829099] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x530f50/0x535400) succeed. 00:24:33.737 [2024-11-29 21:54:05.839174] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x532540/0x576aa0) succeed. 
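Condensed, the target bring-up above is two steps: launch nvmf_tgt on four cores, then create the RDMA transport whose shared receive queue this test will flood. A sketch using the flags recorded in the trace, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper and paths taken relative to the SPDK tree:

    # Start the SPDK NVMe-oF target: instance 0, tracepoint mask 0xFFFF, cores 0-3.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # RDMA transport with 1024 shared buffers, an 8192-byte I/O unit (-u),
    # and a shared receive queue capped at depth 1024 (-s), the resource this test floods.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024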
00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.737 Malloc0 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:33.737 [2024-11-29 21:54:05.937807] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.737 21:54:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# grep -q -w nvme0n1 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:35.115 Malloc1 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.115 21:54:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:36.052 21:54:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:24:36.052 21:54:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:36.052 21:54:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:36.052 21:54:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:24:36.052 21:54:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:36.052 21:54:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.052 Malloc2 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.052 21:54:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:24:36.988 21:54:09 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.988 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.989 Malloc3 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.989 21:54:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:37.926 
21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:37.926 Malloc4 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.926 21:54:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:24:39.304 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:24:39.304 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:39.304 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
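Each pass through the seq 0 5 loop above (cnode0 through cnode4 so far; cnode5 follows below) repeats one provisioning pattern: create a subsystem, back it with a 64 MiB malloc bdev, attach the namespace, add an RDMA listener, then connect from the host and wait for the block device. A standalone sketch of the full loop, with scripts/rpc.py in place of rpc_cmd and a simple poll in place of the harness's waitforblk:

    for i in $(seq 0 5); do
        # Target side (srq_overwhelm.sh@23-26): subsystem, bdev, namespace, listener.
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB, 512-byte blocks
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
        # Host side: connect, then wait until /dev/nvme${i}n1 appears in lsblk.
        nvme connect -i 15 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
            --hostid=8013ee90-59d8-e711-906e-00163566263e \
            -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
        until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 1; done
    done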
00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:39.305 Malloc5 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.305 21:54:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:24:40.241 21:54:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:24:40.241 21:54:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:24:40.241 21:54:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:40.241 21:54:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:24:40.241 21:54:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:24:40.241 21:54:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:40.241 21:54:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:24:40.241 21:54:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:24:40.241 
[global]
00:24:40.241 thread=1
00:24:40.241 invalidate=1
00:24:40.241 rw=read
00:24:40.241 time_based=1
00:24:40.241 runtime=10
00:24:40.241 ioengine=libaio
00:24:40.241 direct=1
00:24:40.242 bs=1048576
00:24:40.242 iodepth=128
00:24:40.242 norandommap=1
00:24:40.242 numjobs=13
00:24:40.242
00:24:40.242 [job0]
00:24:40.242 filename=/dev/nvme0n1
00:24:40.242 [job1]
00:24:40.242 filename=/dev/nvme1n1
00:24:40.242 [job2]
00:24:40.242 filename=/dev/nvme2n1
00:24:40.242 [job3]
00:24:40.242 filename=/dev/nvme3n1
00:24:40.242 [job4]
00:24:40.242 filename=/dev/nvme4n1
00:24:40.242 [job5]
00:24:40.242 filename=/dev/nvme5n1
00:24:40.242 Could not set queue depth (nvme0n1)
00:24:40.242 Could not set queue depth (nvme1n1)
00:24:40.242 Could not set queue depth (nvme2n1)
00:24:40.242 Could not set queue depth (nvme3n1)
00:24:40.242 Could not set queue depth (nvme4n1)
00:24:40.242 Could not set queue depth (nvme5n1)
00:24:40.500 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:40.500 ...
00:24:40.500 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:40.500 ...
00:24:40.500 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:40.500 ...
00:24:40.500 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:40.500 ...
00:24:40.500 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:40.500 ...
00:24:40.500 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:24:40.500 ...
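Reassembled, the job file that fio-wrapper feeds to fio is ordinary fio INI: thirteen 1 MiB-block, queue-depth-128 sequential-read jobs per device, time-based for 10 seconds, across all six connected namespaces. A sketch reproducing it by hand (the temp-file name is illustrative; the wrapper manages its own file):

    # 6 job sections x numjobs=13 = the 78 fio threads started below.
    cat > /tmp/srq_overwhelm.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=1048576
    iodepth=128
    norandommap=1
    numjobs=13

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme1n1
    [job2]
    filename=/dev/nvme2n1
    [job3]
    filename=/dev/nvme3n1
    [job4]
    filename=/dev/nvme4n1
    [job5]
    filename=/dev/nvme5n1
    EOF
    fio /tmp/srq_overwhelm.fio

In the per-job results that follow, BW is simply IOPS at the 1 MiB block size (job0: IOPS=119, BW=120MiB/s), and slat/clat/lat are fio's submission, completion, and total latencies.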
00:24:40.500 fio-3.35 00:24:40.500 Starting 78 threads 00:24:52.710 00:24:52.710 job0: (groupid=0, jobs=1): err= 0: pid=3119790: Fri Nov 29 21:54:23 2024 00:24:52.710 read: IOPS=119, BW=120MiB/s (125MB/s)(1212MiB/10128msec) 00:24:52.710 slat (usec): min=43, max=136677, avg=8256.88, stdev=16203.89 00:24:52.710 clat (msec): min=114, max=3426, avg=1026.73, stdev=907.64 00:24:52.710 lat (msec): min=131, max=3432, avg=1034.99, stdev=912.14 00:24:52.710 clat percentiles (msec): 00:24:52.710 | 1.00th=[ 255], 5.00th=[ 259], 10.00th=[ 262], 20.00th=[ 321], 00:24:52.710 | 30.00th=[ 514], 40.00th=[ 518], 50.00th=[ 542], 60.00th=[ 743], 00:24:52.710 | 70.00th=[ 986], 80.00th=[ 1955], 90.00th=[ 2601], 95.00th=[ 2970], 00:24:52.710 | 99.00th=[ 3339], 99.50th=[ 3373], 99.90th=[ 3406], 99.95th=[ 3440], 00:24:52.710 | 99.99th=[ 3440] 00:24:52.710 bw ( KiB/s): min=20480, max=460800, per=3.03%, avg=116791.37, stdev=122804.25, samples=19 00:24:52.710 iops : min= 20, max= 450, avg=113.89, stdev=119.93, samples=19 00:24:52.710 lat (msec) : 250=0.33%, 500=26.98%, 750=33.09%, 1000=10.89%, 2000=8.99% 00:24:52.710 lat (msec) : >=2000=19.72% 00:24:52.710 cpu : usr=0.07%, sys=2.29%, ctx=2567, majf=0, minf=32769 00:24:52.710 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:24:52.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.710 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.710 issued rwts: total=1212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.710 job0: (groupid=0, jobs=1): err= 0: pid=3119791: Fri Nov 29 21:54:23 2024 00:24:52.710 read: IOPS=107, BW=107MiB/s (113MB/s)(1123MiB/10467msec) 00:24:52.710 slat (usec): min=41, max=2062.4k, avg=8934.04, stdev=62371.15 00:24:52.710 clat (msec): min=426, max=6419, avg=1140.35, stdev=984.44 00:24:52.710 lat (msec): min=492, max=6425, avg=1149.28, stdev=992.54 00:24:52.710 clat percentiles (msec): 00:24:52.710 | 1.00th=[ 523], 5.00th=[ 527], 10.00th=[ 531], 20.00th=[ 535], 00:24:52.710 | 30.00th=[ 542], 40.00th=[ 651], 50.00th=[ 659], 60.00th=[ 684], 00:24:52.710 | 70.00th=[ 986], 80.00th=[ 1754], 90.00th=[ 2903], 95.00th=[ 3373], 00:24:52.710 | 99.00th=[ 4212], 99.50th=[ 4279], 99.90th=[ 6409], 99.95th=[ 6409], 00:24:52.710 | 99.99th=[ 6409] 00:24:52.710 bw ( KiB/s): min=18432, max=249856, per=3.31%, avg=127370.06, stdev=89941.28, samples=16 00:24:52.710 iops : min= 18, max= 244, avg=124.25, stdev=87.84, samples=16 00:24:52.710 lat (msec) : 500=0.18%, 750=67.94%, 1000=2.05%, 2000=10.95%, >=2000=18.88% 00:24:52.710 cpu : usr=0.10%, sys=2.18%, ctx=1525, majf=0, minf=32769 00:24:52.710 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:24:52.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.710 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.710 issued rwts: total=1123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.710 job0: (groupid=0, jobs=1): err= 0: pid=3119792: Fri Nov 29 21:54:23 2024 00:24:52.710 read: IOPS=182, BW=182MiB/s (191MB/s)(1842MiB/10099msec) 00:24:52.710 slat (usec): min=41, max=100704, avg=5429.66, stdev=11947.00 00:24:52.710 clat (msec): min=84, max=1060, avg=666.99, stdev=204.81 00:24:52.710 lat (msec): min=168, max=1065, avg=672.42, stdev=205.95 00:24:52.710 clat percentiles (msec): 00:24:52.711 | 1.00th=[ 251], 5.00th=[ 
257], 10.00th=[ 359], 20.00th=[ 527], 00:24:52.711 | 30.00th=[ 567], 40.00th=[ 634], 50.00th=[ 676], 60.00th=[ 718], 00:24:52.711 | 70.00th=[ 827], 80.00th=[ 869], 90.00th=[ 911], 95.00th=[ 961], 00:24:52.711 | 99.00th=[ 1020], 99.50th=[ 1045], 99.90th=[ 1062], 99.95th=[ 1062], 00:24:52.711 | 99.99th=[ 1062] 00:24:52.711 bw ( KiB/s): min=83968, max=382976, per=4.80%, avg=184630.79, stdev=72471.22, samples=19 00:24:52.711 iops : min= 82, max= 374, avg=180.21, stdev=70.76, samples=19 00:24:52.711 lat (msec) : 100=0.05%, 250=0.87%, 500=16.83%, 750=46.53%, 1000=33.66% 00:24:52.711 lat (msec) : 2000=2.06% 00:24:52.711 cpu : usr=0.16%, sys=2.73%, ctx=1832, majf=0, minf=32769 00:24:52.711 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:24:52.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.711 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.711 issued rwts: total=1842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.711 job0: (groupid=0, jobs=1): err= 0: pid=3119793: Fri Nov 29 21:54:23 2024 00:24:52.711 read: IOPS=119, BW=119MiB/s (125MB/s)(1199MiB/10070msec) 00:24:52.711 slat (usec): min=40, max=2087.1k, avg=8343.64, stdev=78583.38 00:24:52.711 clat (msec): min=57, max=5768, avg=582.40, stdev=588.49 00:24:52.711 lat (msec): min=76, max=5797, avg=590.75, stdev=609.42 00:24:52.711 clat percentiles (msec): 00:24:52.711 | 1.00th=[ 163], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 255], 00:24:52.711 | 30.00th=[ 259], 40.00th=[ 266], 50.00th=[ 355], 60.00th=[ 384], 00:24:52.711 | 70.00th=[ 609], 80.00th=[ 961], 90.00th=[ 1083], 95.00th=[ 1636], 00:24:52.711 | 99.00th=[ 2089], 99.50th=[ 4111], 99.90th=[ 5738], 99.95th=[ 5738], 00:24:52.711 | 99.99th=[ 5738] 00:24:52.711 bw ( KiB/s): min=22528, max=505856, per=6.71%, avg=258272.25, stdev=178970.89, samples=8 00:24:52.711 iops : min= 22, max= 494, avg=252.12, stdev=174.86, samples=8 00:24:52.711 lat (msec) : 100=0.17%, 250=1.33%, 500=65.30%, 750=6.59%, 1000=8.92% 00:24:52.711 lat (msec) : 2000=14.85%, >=2000=2.84% 00:24:52.711 cpu : usr=0.11%, sys=2.04%, ctx=1305, majf=0, minf=32769 00:24:52.711 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:24:52.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.711 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.711 issued rwts: total=1199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.711 job0: (groupid=0, jobs=1): err= 0: pid=3119794: Fri Nov 29 21:54:23 2024 00:24:52.711 read: IOPS=62, BW=62.2MiB/s (65.2MB/s)(626MiB/10067msec) 00:24:52.711 slat (usec): min=49, max=2089.9k, avg=15967.23, stdev=135799.33 00:24:52.711 clat (msec): min=65, max=4791, avg=1242.14, stdev=995.32 00:24:52.711 lat (msec): min=68, max=4797, avg=1258.11, stdev=1007.08 00:24:52.711 clat percentiles (msec): 00:24:52.711 | 1.00th=[ 114], 5.00th=[ 255], 10.00th=[ 439], 20.00th=[ 726], 00:24:52.711 | 30.00th=[ 751], 40.00th=[ 776], 50.00th=[ 852], 60.00th=[ 885], 00:24:52.711 | 70.00th=[ 902], 80.00th=[ 2802], 90.00th=[ 2869], 95.00th=[ 2903], 00:24:52.711 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:24:52.711 | 99.99th=[ 4799] 00:24:52.711 bw ( KiB/s): min=49152, max=177820, per=3.31%, avg=127318.25, stdev=50579.09, samples=8 00:24:52.711 iops : min= 48, max= 173, avg=124.12, stdev=49.36, 
samples=8 00:24:52.711 lat (msec) : 100=0.32%, 250=4.47%, 500=6.55%, 750=19.33%, 1000=46.81% 00:24:52.711 lat (msec) : >=2000=22.52% 00:24:52.711 cpu : usr=0.07%, sys=1.88%, ctx=547, majf=0, minf=32769 00:24:52.711 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:24:52.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.711 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:24:52.711 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.711 job0: (groupid=0, jobs=1): err= 0: pid=3119795: Fri Nov 29 21:54:23 2024 00:24:52.711 read: IOPS=32, BW=32.5MiB/s (34.0MB/s)(328MiB/10107msec) 00:24:52.711 slat (usec): min=643, max=1743.6k, avg=30540.71, stdev=97946.58 00:24:52.711 clat (msec): min=87, max=4891, avg=2594.15, stdev=984.79 00:24:52.711 lat (msec): min=137, max=4982, avg=2624.69, stdev=993.93 00:24:52.711 clat percentiles (msec): 00:24:52.711 | 1.00th=[ 144], 5.00th=[ 384], 10.00th=[ 902], 20.00th=[ 1670], 00:24:52.711 | 30.00th=[ 2567], 40.00th=[ 2970], 50.00th=[ 3138], 60.00th=[ 3171], 00:24:52.711 | 70.00th=[ 3205], 80.00th=[ 3239], 90.00th=[ 3306], 95.00th=[ 3373], 00:24:52.711 | 99.00th=[ 3440], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:24:52.711 | 99.99th=[ 4866] 00:24:52.711 bw ( KiB/s): min= 4096, max=61440, per=0.97%, avg=37254.36, stdev=15422.94, samples=11 00:24:52.711 iops : min= 4, max= 60, avg=36.27, stdev=15.13, samples=11 00:24:52.711 lat (msec) : 100=0.30%, 250=2.74%, 500=3.05%, 750=2.44%, 1000=4.57% 00:24:52.711 lat (msec) : 2000=10.67%, >=2000=76.22% 00:24:52.711 cpu : usr=0.01%, sys=0.95%, ctx=1277, majf=0, minf=32769 00:24:52.711 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.8% 00:24:52.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.711 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:24:52.711 issued rwts: total=328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.711 job0: (groupid=0, jobs=1): err= 0: pid=3119796: Fri Nov 29 21:54:23 2024 00:24:52.711 read: IOPS=27, BW=27.1MiB/s (28.4MB/s)(273MiB/10083msec) 00:24:52.711 slat (usec): min=403, max=1729.8k, avg=36662.71, stdev=106386.57 00:24:52.711 clat (msec): min=72, max=4842, avg=3197.07, stdev=1332.24 00:24:52.711 lat (msec): min=94, max=4932, avg=3233.74, stdev=1331.97 00:24:52.711 clat percentiles (msec): 00:24:52.711 | 1.00th=[ 101], 5.00th=[ 368], 10.00th=[ 869], 20.00th=[ 2106], 00:24:52.711 | 30.00th=[ 3004], 40.00th=[ 3138], 50.00th=[ 3339], 60.00th=[ 3708], 00:24:52.711 | 70.00th=[ 4144], 80.00th=[ 4530], 90.00th=[ 4665], 95.00th=[ 4732], 00:24:52.711 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:24:52.711 | 99.99th=[ 4866] 00:24:52.711 bw ( KiB/s): min=10240, max=61440, per=0.70%, avg=27110.09, stdev=13653.42, samples=11 00:24:52.711 iops : min= 10, max= 60, avg=26.27, stdev=13.32, samples=11 00:24:52.711 lat (msec) : 100=0.73%, 250=2.93%, 500=2.56%, 750=2.56%, 1000=2.56% 00:24:52.711 lat (msec) : 2000=8.42%, >=2000=80.22% 00:24:52.711 cpu : usr=0.02%, sys=0.96%, ctx=1239, majf=0, minf=32769 00:24:52.711 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.7%, >=64=76.9% 00:24:52.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.711 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.7% 00:24:52.711 issued rwts: total=273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.711 job0: (groupid=0, jobs=1): err= 0: pid=3119797: Fri Nov 29 21:54:23 2024 00:24:52.711 read: IOPS=48, BW=48.7MiB/s (51.0MB/s)(493MiB/10133msec) 00:24:52.711 slat (usec): min=655, max=1737.2k, avg=20370.60, stdev=80386.47 00:24:52.711 clat (msec): min=87, max=3485, avg=2053.98, stdev=912.46 00:24:52.711 lat (msec): min=138, max=3493, avg=2074.35, stdev=913.51 00:24:52.711 clat percentiles (msec): 00:24:52.711 | 1.00th=[ 271], 5.00th=[ 919], 10.00th=[ 1028], 20.00th=[ 1116], 00:24:52.711 | 30.00th=[ 1200], 40.00th=[ 1552], 50.00th=[ 1989], 60.00th=[ 2567], 00:24:52.711 | 70.00th=[ 2903], 80.00th=[ 3004], 90.00th=[ 3205], 95.00th=[ 3339], 00:24:52.711 | 99.00th=[ 3440], 99.50th=[ 3440], 99.90th=[ 3473], 99.95th=[ 3473], 00:24:52.711 | 99.99th=[ 3473] 00:24:52.711 bw ( KiB/s): min=24576, max=180224, per=1.29%, avg=49812.80, stdev=38478.25, samples=15 00:24:52.711 iops : min= 24, max= 176, avg=48.47, stdev=37.60, samples=15 00:24:52.711 lat (msec) : 100=0.20%, 250=0.61%, 500=1.62%, 750=1.01%, 1000=2.64% 00:24:52.711 lat (msec) : 2000=44.22%, >=2000=49.70% 00:24:52.711 cpu : usr=0.01%, sys=1.12%, ctx=1470, majf=0, minf=32769 00:24:52.711 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2% 00:24:52.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.711 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:24:52.711 issued rwts: total=493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.711 job0: (groupid=0, jobs=1): err= 0: pid=3119798: Fri Nov 29 21:54:23 2024 00:24:52.711 read: IOPS=54, BW=54.5MiB/s (57.1MB/s)(549MiB/10076msec) 00:24:52.711 slat (usec): min=54, max=1748.7k, avg=18240.42, stdev=76779.47 00:24:52.711 clat (msec): min=59, max=4320, avg=1644.10, stdev=1016.25 00:24:52.711 lat (msec): min=131, max=4423, avg=1662.34, stdev=1027.80 00:24:52.711 clat percentiles (msec): 00:24:52.711 | 1.00th=[ 140], 5.00th=[ 241], 10.00th=[ 426], 20.00th=[ 743], 00:24:52.711 | 30.00th=[ 902], 40.00th=[ 1217], 50.00th=[ 1502], 60.00th=[ 1536], 00:24:52.712 | 70.00th=[ 2467], 80.00th=[ 2802], 90.00th=[ 3306], 95.00th=[ 3373], 00:24:52.712 | 99.00th=[ 3440], 99.50th=[ 3440], 99.90th=[ 4329], 99.95th=[ 4329], 00:24:52.712 | 99.99th=[ 4329] 00:24:52.712 bw ( KiB/s): min=10240, max=169984, per=1.86%, avg=71559.17, stdev=54522.75, samples=12 00:24:52.712 iops : min= 10, max= 166, avg=69.67, stdev=53.08, samples=12 00:24:52.712 lat (msec) : 100=0.18%, 250=5.65%, 500=5.46%, 750=9.84%, 1000=13.48% 00:24:52.712 lat (msec) : 2000=33.88%, >=2000=31.51% 00:24:52.712 cpu : usr=0.06%, sys=1.08%, ctx=1625, majf=0, minf=32769 00:24:52.712 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5% 00:24:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.712 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:24:52.712 issued rwts: total=549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.712 job0: (groupid=0, jobs=1): err= 0: pid=3119799: Fri Nov 29 21:54:23 2024 00:24:52.712 read: IOPS=39, BW=39.1MiB/s (41.0MB/s)(394MiB/10076msec) 00:24:52.712 slat (usec): min=437, max=2156.2k, avg=25387.48, stdev=138869.90 00:24:52.712 clat (msec): min=71, max=6476, avg=1684.69, 
stdev=991.32 00:24:52.712 lat (msec): min=95, max=6519, avg=1710.08, stdev=1018.15 00:24:52.712 clat percentiles (msec): 00:24:52.712 | 1.00th=[ 105], 5.00th=[ 600], 10.00th=[ 919], 20.00th=[ 1003], 00:24:52.712 | 30.00th=[ 1183], 40.00th=[ 1385], 50.00th=[ 1569], 60.00th=[ 1770], 00:24:52.712 | 70.00th=[ 1972], 80.00th=[ 2165], 90.00th=[ 2366], 95.00th=[ 2500], 00:24:52.712 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 6477], 99.95th=[ 6477], 00:24:52.712 | 99.99th=[ 6477] 00:24:52.712 bw ( KiB/s): min=14336, max=153600, per=1.77%, avg=68305.00, stdev=46837.56, samples=8 00:24:52.712 iops : min= 14, max= 150, avg=66.50, stdev=45.87, samples=8 00:24:52.712 lat (msec) : 100=0.51%, 250=1.52%, 500=2.03%, 750=2.79%, 1000=12.94% 00:24:52.712 lat (msec) : 2000=51.02%, >=2000=29.19% 00:24:52.712 cpu : usr=0.02%, sys=0.95%, ctx=1417, majf=0, minf=32769 00:24:52.712 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0% 00:24:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.712 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:24:52.712 issued rwts: total=394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.712 job0: (groupid=0, jobs=1): err= 0: pid=3119800: Fri Nov 29 21:54:23 2024 00:24:52.712 read: IOPS=41, BW=41.5MiB/s (43.5MB/s)(419MiB/10101msec) 00:24:52.712 slat (usec): min=415, max=2091.2k, avg=23866.71, stdev=125099.94 00:24:52.712 clat (msec): min=98, max=5574, avg=2086.06, stdev=1103.00 00:24:52.712 lat (msec): min=105, max=5585, avg=2109.93, stdev=1110.63 00:24:52.712 clat percentiles (msec): 00:24:52.712 | 1.00th=[ 203], 5.00th=[ 701], 10.00th=[ 1301], 20.00th=[ 1586], 00:24:52.712 | 30.00th=[ 1636], 40.00th=[ 1653], 50.00th=[ 1703], 60.00th=[ 1888], 00:24:52.712 | 70.00th=[ 2123], 80.00th=[ 2467], 90.00th=[ 2802], 95.00th=[ 5201], 00:24:52.712 | 99.00th=[ 5269], 99.50th=[ 5470], 99.90th=[ 5604], 99.95th=[ 5604], 00:24:52.712 | 99.99th=[ 5604] 00:24:52.712 bw ( KiB/s): min=30720, max=98304, per=1.55%, avg=59687.40, stdev=22907.58, samples=10 00:24:52.712 iops : min= 30, max= 96, avg=58.10, stdev=22.49, samples=10 00:24:52.712 lat (msec) : 100=0.24%, 250=1.19%, 500=1.91%, 750=1.67%, 1000=2.15% 00:24:52.712 lat (msec) : 2000=56.80%, >=2000=36.04% 00:24:52.712 cpu : usr=0.03%, sys=1.13%, ctx=1299, majf=0, minf=32769 00:24:52.712 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:24:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.712 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:24:52.712 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.712 job0: (groupid=0, jobs=1): err= 0: pid=3119801: Fri Nov 29 21:54:23 2024 00:24:52.712 read: IOPS=82, BW=82.5MiB/s (86.5MB/s)(838MiB/10159msec) 00:24:52.712 slat (usec): min=56, max=1755.7k, avg=11978.85, stdev=64463.73 00:24:52.712 clat (msec): min=116, max=3182, avg=1311.90, stdev=702.51 00:24:52.712 lat (msec): min=183, max=3183, avg=1323.88, stdev=704.44 00:24:52.712 clat percentiles (msec): 00:24:52.712 | 1.00th=[ 401], 5.00th=[ 776], 10.00th=[ 785], 20.00th=[ 844], 00:24:52.712 | 30.00th=[ 911], 40.00th=[ 1003], 50.00th=[ 1028], 60.00th=[ 1045], 00:24:52.712 | 70.00th=[ 1133], 80.00th=[ 1905], 90.00th=[ 2668], 95.00th=[ 2836], 00:24:52.712 | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3171], 99.95th=[ 3171], 
00:24:52.712 | 99.99th=[ 3171] 00:24:52.712 bw ( KiB/s): min=20480, max=172032, per=2.52%, avg=96949.80, stdev=52308.73, samples=15 00:24:52.712 iops : min= 20, max= 168, avg=94.60, stdev=51.06, samples=15 00:24:52.712 lat (msec) : 250=0.60%, 500=0.95%, 750=0.84%, 1000=37.23%, 2000=41.17% 00:24:52.712 lat (msec) : >=2000=19.21% 00:24:52.712 cpu : usr=0.04%, sys=1.56%, ctx=1304, majf=0, minf=32769 00:24:52.712 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:24:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.712 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.712 issued rwts: total=838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.712 job0: (groupid=0, jobs=1): err= 0: pid=3119802: Fri Nov 29 21:54:23 2024 00:24:52.712 read: IOPS=55, BW=55.8MiB/s (58.5MB/s)(567MiB/10159msec) 00:24:52.712 slat (usec): min=57, max=1716.0k, avg=17726.30, stdev=74164.45 00:24:52.712 clat (msec): min=104, max=6535, avg=1773.76, stdev=811.86 00:24:52.712 lat (msec): min=172, max=6559, avg=1791.49, stdev=825.34 00:24:52.712 clat percentiles (msec): 00:24:52.712 | 1.00th=[ 268], 5.00th=[ 835], 10.00th=[ 1070], 20.00th=[ 1183], 00:24:52.712 | 30.00th=[ 1267], 40.00th=[ 1418], 50.00th=[ 1552], 60.00th=[ 1821], 00:24:52.712 | 70.00th=[ 2039], 80.00th=[ 2333], 90.00th=[ 2869], 95.00th=[ 3272], 00:24:52.712 | 99.00th=[ 4866], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:24:52.712 | 99.99th=[ 6544] 00:24:52.712 bw ( KiB/s): min=24576, max=165888, per=1.67%, avg=64164.50, stdev=34879.12, samples=14 00:24:52.712 iops : min= 24, max= 162, avg=62.57, stdev=34.09, samples=14 00:24:52.712 lat (msec) : 250=0.88%, 500=1.59%, 750=1.94%, 1000=1.76%, 2000=62.43% 00:24:52.712 lat (msec) : >=2000=31.39% 00:24:52.712 cpu : usr=0.02%, sys=1.33%, ctx=1702, majf=0, minf=32769 00:24:52.712 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9% 00:24:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.712 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:24:52.712 issued rwts: total=567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.712 job1: (groupid=0, jobs=1): err= 0: pid=3119813: Fri Nov 29 21:54:23 2024 00:24:52.712 read: IOPS=3, BW=3157KiB/s (3233kB/s)(32.0MiB/10380msec) 00:24:52.712 slat (usec): min=439, max=2098.3k, avg=322929.04, stdev=737563.60 00:24:52.712 clat (msec): min=45, max=10292, avg=4985.93, stdev=2245.94 00:24:52.712 lat (msec): min=2088, max=10379, avg=5308.86, stdev=2255.56 00:24:52.712 clat percentiles (msec): 00:24:52.712 | 1.00th=[ 46], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 4212], 00:24:52.712 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 4279], 60.00th=[ 6342], 00:24:52.712 | 70.00th=[ 6409], 80.00th=[ 6409], 90.00th=[ 8557], 95.00th=[ 8557], 00:24:52.712 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:24:52.712 | 99.99th=[10268] 00:24:52.712 lat (msec) : 50=3.12%, >=2000=96.88% 00:24:52.712 cpu : usr=0.02%, sys=0.18%, ctx=44, majf=0, minf=8193 00:24:52.712 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:24:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.712 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:24:52.712 issued rwts: total=32,0,0,0 short=0,0,0,0 
job1: (groupid=0, jobs=1): err= 0: pid=3119814: Fri Nov 29 21:54:23 2024
  read: IOPS=7, BW=7292KiB/s (7467kB/s)(74.0MiB/10392msec)
  slat (usec): min=408, max=2135.4k, avg=140130.22, stdev=463710.09
  clat (msec): min=21, max=10386, avg=5086.09, stdev=2755.75
  lat (msec): min=1744, max=10391, avg=5226.22, stdev=2758.35
  clat percentiles (msec): 1.00th=[ 22], 5.00th=[ 1821], 10.00th=[ 1854], 20.00th=[ 1972], 30.00th=[ 3943], 40.00th=[ 4077], 50.00th=[ 4178], 60.00th=[ 6074], 70.00th=[ 6208], 80.00th=[ 8490], 90.00th=[ 8557], 95.00th=[10268], 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 99.99th=[10402]
  lat (msec): 50=1.35%, 2000=20.27%, >=2000=78.38%
  cpu: usr=0.01%, sys=0.38%, ctx=165, majf=0, minf=18945
  IO depths: 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
  issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119815: Fri Nov 29 21:54:23 2024
  read: IOPS=4, BW=4467KiB/s (4574kB/s)(46.0MiB/10545msec)
  slat (usec): min=1028, max=2127.9k, avg=228457.29, stdev=642002.46
  clat (msec): min=35, max=10543, avg=9539.80, stdev=2472.79
  lat (msec): min=2103, max=10544, avg=9768.26, stdev=2019.02
  clat percentiles (msec): 1.00th=[ 36], 5.00th=[ 4212], 10.00th=[ 4329], 20.00th=[10402], 30.00th=[10402], 40.00th=[10537], 50.00th=[10537], 60.00th=[10537], 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 99.99th=[10537]
  lat (msec): 50=2.17%, >=2000=97.83%
  cpu: usr=0.00%, sys=0.52%, ctx=97, majf=0, minf=11777
  IO depths: 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
  issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119816: Fri Nov 29 21:54:23 2024
  read: IOPS=4, BW=4370KiB/s (4475kB/s)(45.0MiB/10544msec)
  slat (usec): min=859, max=2187.9k, avg=233418.35, stdev=650289.72
  clat (msec): min=39, max=10541, avg=9665.14, stdev=2278.28
  lat (msec): min=2103, max=10543, avg=9898.55, stdev=1745.47
  clat percentiles (msec): 1.00th=[ 40], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[10402], 30.00th=[10402], 40.00th=[10537], 50.00th=[10537], 60.00th=[10537], 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 99.99th=[10537]
  lat (msec): 50=2.22%, >=2000=97.78%
  cpu: usr=0.00%, sys=0.48%, ctx=94, majf=0, minf=11521
  IO depths: 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
  issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119817: Fri Nov 29 21:54:23 2024
  read: IOPS=44, BW=44.6MiB/s (46.7MB/s)(470MiB/10542msec)
  slat (usec): min=51, max=2087.8k, avg=22347.78, stdev=155603.87
  clat (msec): min=35, max=6356, avg=2011.60, stdev=1551.92
  lat (msec): min=739, max=6364, avg=2033.95, stdev=1560.48
  clat percentiles (msec): 1.00th=[ 743], 5.00th=[ 760], 10.00th=[ 793], 20.00th=[ 852], 30.00th=[ 860], 40.00th=[ 978], 50.00th=[ 1620], 60.00th=[ 2165], 70.00th=[ 2333], 80.00th=[ 2567], 90.00th=[ 4530], 95.00th=[ 6141], 99.00th=[ 6342], 99.50th=[ 6342], 99.90th=[ 6342], 99.95th=[ 6342], 99.99th=[ 6342]
  bw (KiB/s): min=2023, max=165888, per=2.61%, avg=100304.00, stdev=69927.48, samples=7
  iops: min=1, max=162, avg=97.71, stdev=68.43, samples=7
  lat (msec): 50=0.21%, 750=2.34%, 1000=38.09%, 2000=15.11%, >=2000=44.26%
  cpu: usr=0.04%, sys=1.57%, ctx=843, majf=0, minf=32769
  IO depths: 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
  issued rwts: total=470,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119818: Fri Nov 29 21:54:23 2024
  read: IOPS=70, BW=70.2MiB/s (73.7MB/s)(707MiB/10065msec)
  slat (usec): min=50, max=2028.4k, avg=14202.56, stdev=104756.75
  clat (msec): min=21, max=5083, avg=1055.66, stdev=595.24
  lat (msec): min=65, max=6919, avg=1069.87, stdev=622.00
  clat percentiles (msec): 1.00th=[ 93], 5.00th=[ 401], 10.00th=[ 609], 20.00th=[ 651], 30.00th=[ 701], 40.00th=[ 885], 50.00th=[ 1028], 60.00th=[ 1099], 70.00th=[ 1150], 80.00th=[ 1250], 90.00th=[ 1754], 95.00th=[ 1905], 99.00th=[ 3205], 99.50th=[ 5000], 99.90th=[ 5067], 99.95th=[ 5067], 99.99th=[ 5067]
  bw (KiB/s): min=51200, max=221184, per=3.17%, avg=121969.78, stdev=56673.67, samples=9
  iops: min=50, max=216, avg=119.11, stdev=55.35, samples=9
  lat (msec): 50=0.14%, 100=1.41%, 250=1.70%, 500=2.83%, 750=29.84%, 1000=7.92%, 2000=54.17%, >=2000=1.98%
  cpu: usr=0.03%, sys=1.22%, ctx=846, majf=0, minf=32769
  IO depths: 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
  issued rwts: total=707,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119819: Fri Nov 29 21:54:23 2024
  read: IOPS=31, BW=31.9MiB/s (33.4MB/s)(332MiB/10413msec)
  slat (usec): min=41, max=2092.7k, avg=30329.56, stdev=187091.98
  clat (msec): min=341, max=8348, avg=1994.27, stdev=1565.46
  lat (msec): min=438, max=8349, avg=2024.60, stdev=1607.42
  clat percentiles (msec): 1.00th=[ 443], 5.00th=[ 460], 10.00th=[ 693], 20.00th=[ 953], 30.00th=[ 1011], 40.00th=[ 1011], 50.00th=[ 1083], 60.00th=[ 1318], 70.00th=[ 3239], 80.00th=[ 3675], 90.00th=[ 4010], 95.00th=[ 4245], 99.00th=[ 8221], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 99.99th=[ 8356]
  bw (KiB/s): min=40960, max=125470, per=2.70%, avg=104071.50, stdev=42075.11, samples=4
  iops: min=40, max=122, avg=101.50, stdev=41.00, samples=4
  lat (msec): 500=5.12%, 750=9.04%, 1000=12.95%, 2000=37.65%, >=2000=35.24%
  cpu: usr=0.00%, sys=1.31%, ctx=529, majf=0, minf=32769
  IO depths: 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
  issued rwts: total=332,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119821: Fri Nov 29 21:54:23 2024
  read: IOPS=2, BW=2930KiB/s (3000kB/s)(30.0MiB/10485msec)
  slat (usec): min=848, max=2120.6k, avg=348171.83, stdev=768632.05
  clat (msec): min=39, max=10475, avg=9184.07, stdev=2795.64
  lat (msec): min=2103, max=10484, avg=9532.24, stdev=2205.70
  clat percentiles (msec): 1.00th=[ 40], 5.00th=[ 2106], 10.00th=[ 4212], 20.00th=[ 8557], 30.00th=[10268], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402], 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 99.99th=[10537]
  lat (msec): 50=3.33%, >=2000=96.67%
  cpu: usr=0.00%, sys=0.30%, ctx=72, majf=0, minf=7681
  IO depths: 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
  issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119822: Fri Nov 29 21:54:23 2024
  read: IOPS=15, BW=15.2MiB/s (15.9MB/s)(159MiB/10475msec)
  slat (usec): min=557, max=2121.3k, avg=62942.11, stdev=316771.18
  clat (msec): min=466, max=9734, avg=2367.63, stdev=2896.28
  lat (msec): min=480, max=9748, avg=2430.57, stdev=2952.87
  clat percentiles (msec): 1.00th=[ 481], 5.00th=[ 502], 10.00th=[ 592], 20.00th=[ 785], 30.00th=[ 961], 40.00th=[ 1150], 50.00th=[ 1301], 60.00th=[ 1502], 70.00th=[ 1620], 80.00th=[ 1854], 90.00th=[ 9731], 95.00th=[ 9731], 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 99.99th=[ 9731]
  bw (KiB/s): min=65015, max=65015, per=1.69%, avg=65015.00, stdev=0.00, samples=1
  iops: min=63, max=63, avg=63.00, stdev=0.00, samples=1
  lat (msec): 500=4.40%, 750=10.69%, 1000=18.87%, 2000=50.31%, >=2000=15.72%
  cpu: usr=0.02%, sys=0.77%, ctx=305, majf=0, minf=32769
  IO depths: 1=0.6%, 2=1.3%, 4=2.5%, 8=5.0%, 16=10.1%, 32=20.1%, >=64=60.4%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=97.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.0%
  issued rwts: total=159,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119823: Fri Nov 29 21:54:23 2024
  read: IOPS=3, BW=3436KiB/s (3519kB/s)(35.0MiB/10430msec)
  slat (usec): min=923, max=2095.2k, avg=296676.49, stdev=711310.51
  clat (msec): min=45, max=10427, avg=7731.02, stdev=3279.98
  lat (msec): min=2081, max=10429, avg=8027.69, stdev=3024.00
  clat percentiles (msec): 1.00th=[ 46], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 4245], 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10402], 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 99.99th=[10402]
  lat (msec): 50=2.86%, >=2000=97.14%
  cpu: usr=0.02%, sys=0.29%, ctx=58, majf=0, minf=8961
  IO depths: 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
  issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119824: Fri Nov 29 21:54:23 2024
  read: IOPS=27, BW=28.0MiB/s (29.3MB/s)(291MiB/10406msec)
  slat (usec): min=661, max=2096.2k, avg=35627.82, stdev=199555.19
  clat (msec): min=35, max=6342, avg=2435.99, stdev=924.57
  lat (msec): min=1411, max=6350, avg=2471.62, stdev=940.04
  clat percentiles (msec): 1.00th=[ 1401], 5.00th=[ 1418], 10.00th=[ 1435], 20.00th=[ 1636], 30.00th=[ 1838], 40.00th=[ 2165], 50.00th=[ 2333], 60.00th=[ 2500], 70.00th=[ 2802], 80.00th=[ 3104], 90.00th=[ 3440], 95.00th=[ 3608], 99.00th=[ 6342], 99.50th=[ 6342], 99.90th=[ 6342], 99.95th=[ 6342], 99.99th=[ 6342]
  bw (KiB/s): min=26624, max=96256, per=1.73%, avg=66751.60, stdev=34120.42, samples=5
  iops: min=26, max=94, avg=65.00, stdev=33.56, samples=5
  lat (msec): 50=0.34%, 2000=36.08%, >=2000=63.57%
  cpu: usr=0.04%, sys=1.32%, ctx=779, majf=0, minf=32769
  IO depths: 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
  issued rwts: total=291,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119825: Fri Nov 29 21:54:23 2024
  read: IOPS=3, BW=3730KiB/s (3820kB/s)(38.0MiB/10432msec)
  slat (usec): min=701, max=2129.3k, avg=273573.65, stdev=682905.66
  clat (msec): min=35, max=10417, avg=8058.94, stdev=2541.95
  lat (msec): min=2071, max=10431, avg=8332.52, stdev=2190.20
  clat percentiles (msec): 1.00th=[ 36], 5.00th=[ 2072], 10.00th=[ 4212], 20.00th=[ 8423], 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[ 8557], 70.00th=[ 8557], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402], 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 99.99th=[10402]
  lat (msec): 50=2.63%, >=2000=97.37%
  cpu: usr=0.00%, sys=0.30%, ctx=78, majf=0, minf=9729
  IO depths: 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
  issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job1: (groupid=0, jobs=1): err= 0: pid=3119826: Fri Nov 29 21:54:23 2024
  read: IOPS=32, BW=32.6MiB/s (34.2MB/s)(340MiB/10432msec)
  slat (usec): min=74, max=2077.1k, avg=30673.48, stdev=209627.25
  clat (usec): min=342, max=5045.1k, avg=2527727.10, stdev=1826094.71
  lat (msec): min=897, max=5046, avg=2558.40, stdev=1824.94
  clat percentiles (msec): 1.00th=[ 894], 5.00th=[ 894], 10.00th=[ 894], 20.00th=[ 902], 30.00th=[ 902], 40.00th=[ 919], 50.00th=[ 936], 60.00th=[ 4212], 70.00th=[ 4463], 80.00th=[ 4665], 90.00th=[ 4799], 95.00th=[ 4933], 99.00th=[ 5000], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 99.99th=[ 5067]
  bw (KiB/s): min=2048, max=149504, per=1.88%, avg=72362.67, stdev=62558.49, samples=6
  iops: min=2, max=146, avg=70.67, stdev=61.09, samples=6
  lat (usec): 500=0.29%
  lat (msec): 1000=54.12%, 2000=0.59%, >=2000=45.00%
  cpu: usr=0.00%, sys=1.61%, ctx=353, majf=0, minf=32769
  IO depths: 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.4%, >=64=81.5%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
  issued rwts: total=340,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119836: Fri Nov 29 21:54:23 2024
  read: IOPS=18, BW=18.5MiB/s (19.4MB/s)(192MiB/10377msec)
  slat (usec): min=960, max=2042.2k, avg=53813.45, stdev=235250.00
  clat (msec): min=43, max=7103, avg=4748.88, stdev=1596.12
  lat (msec): min=2086, max=7130, avg=4802.69, stdev=1552.55
  clat percentiles (msec): 1.00th=[ 2089], 5.00th=[ 2668], 10.00th=[ 2802], 20.00th=[ 2937], 30.00th=[ 3205], 40.00th=[ 4329], 50.00th=[ 4933], 60.00th=[ 5470], 70.00th=[ 6007], 80.00th=[ 6477], 90.00th=[ 6812], 95.00th=[ 6946], 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 99.99th=[ 7080]
  bw (KiB/s): min=2048, max=32768, per=0.57%, avg=21845.33, stdev=13870.07, samples=6
  iops: min=2, max=32, avg=21.33, stdev=13.54, samples=6
  lat (msec): 50=0.52%, >=2000=99.48%
  cpu: usr=0.00%, sys=0.95%, ctx=648, majf=0, minf=32769
  IO depths: 1=0.5%, 2=1.0%, 4=2.1%, 8=4.2%, 16=8.3%, 32=16.7%, >=64=67.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5%
  issued rwts: total=192,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
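A quick consistency check that works on any record here: the average transfer size per IO is the average bandwidth divided by the average IOPS. Taking pid=3119836 directly above (assuming both averages come from the same sample windows):

    # Average block size implied by the bw and iops lines of pid=3119836.
    bw_kib_s = 21845.33   # bw (KiB/s): ... avg=21845.33
    iops     = 21.33      # iops: ... avg=21.33
    print(bw_kib_s / iops)  # ~1024 KiB, i.e. roughly 1 MiB per IO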
job2: (groupid=0, jobs=1): err= 0: pid=3119837: Fri Nov 29 21:54:23 2024
  read: IOPS=16, BW=16.3MiB/s (17.1MB/s)(172MiB/10548msec)
  slat (usec): min=507, max=2141.3k, avg=61098.40, stdev=303531.09
  clat (msec): min=37, max=10353, avg=7334.22, stdev=3213.58
  lat (msec): min=1705, max=10391, avg=7395.32, stdev=3171.82
  clat percentiles (msec): 1.00th=[ 1703], 5.00th=[ 1871], 10.00th=[ 2039], 20.00th=[ 2198], 30.00th=[ 8221], 40.00th=[ 8423], 50.00th=[ 8792], 60.00th=[ 9060], 70.00th=[ 9597], 80.00th=[ 9731], 90.00th=[10000], 95.00th=[10268], 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 99.99th=[10402]
  bw (KiB/s): min=2003, max=49053, per=0.40%, avg=15336.00, stdev=19944.46, samples=6
  iops: min=1, max=47, avg=14.67, stdev=19.30, samples=6
  lat (msec): 50=0.58%, 2000=8.72%, >=2000=90.70%
  cpu: usr=0.00%, sys=1.12%, ctx=502, majf=0, minf=32769
  IO depths: 1=0.6%, 2=1.2%, 4=2.3%, 8=4.7%, 16=9.3%, 32=18.6%, >=64=63.4%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=97.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.2%
  issued rwts: total=172,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119838: Fri Nov 29 21:54:23 2024
  read: IOPS=164, BW=165MiB/s (173MB/s)(1724MiB/10457msec)
  slat (usec): min=45, max=1641.2k, avg=6057.96, stdev=49660.11
  clat (usec): min=1541, max=4133.5k, avg=747859.93, stdev=767679.56
  lat (msec): min=255, max=4135, avg=753.92, stdev=772.47
  clat percentiles (msec): 1.00th=[ 255], 5.00th=[ 257], 10.00th=[ 259], 20.00th=[ 264], 30.00th=[ 372], 40.00th=[ 397], 50.00th=[ 514], 60.00th=[ 518], 70.00th=[ 625], 80.00th=[ 902], 90.00th=[ 1519], 95.00th=[ 3004], 99.00th=[ 3809], 99.50th=[ 3943], 99.90th=[ 4111], 99.95th=[ 4144], 99.99th=[ 4144]
  bw (KiB/s): min=26624, max=501760, per=5.31%, avg=204259.87, stdev=147806.91, samples=16
  iops: min=26, max=490, avg=199.44, stdev=144.34, samples=16
  lat (msec): 2=0.06%, 500=46.23%, 750=26.04%, 1000=12.70%, 2000=7.25%, >=2000=7.71%
  cpu: usr=0.14%, sys=2.86%, ctx=1654, majf=0, minf=32769
  IO depths: 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
  issued rwts: total=1724,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119839: Fri Nov 29 21:54:23 2024
  read: IOPS=67, BW=67.6MiB/s (70.9MB/s)(680MiB/10052msec)
  slat (usec): min=44, max=1727.3k, avg=14705.17, stdev=67834.99
  clat (msec): min=48, max=3283, avg=1373.21, stdev=824.69
  lat (msec): min=73, max=3295, avg=1387.91, stdev=830.88
  clat percentiles (msec): 1.00th=[ 91], 5.00th=[ 368], 10.00th=[ 380], 20.00th=[ 388], 30.00th=[ 592], 40.00th=[ 1020], 50.00th=[ 1469], 60.00th=[ 1770], 70.00th=[ 2005], 80.00th=[ 2106], 90.00th=[ 2567], 95.00th=[ 2635], 99.00th=[ 2735], 99.50th=[ 2735], 99.90th=[ 3272], 99.95th=[ 3272], 99.99th=[ 3272]
  bw (KiB/s): min=30720, max=284672, per=2.08%, avg=80029.54, stdev=68122.10, samples=13
  iops: min=30, max=278, avg=78.15, stdev=66.53, samples=13
  lat (msec): 50=0.15%, 100=1.32%, 250=1.62%, 500=24.71%, 750=5.59%, 1000=5.00%, 2000=31.32%, >=2000=30.29%
  cpu: usr=0.04%, sys=1.46%, ctx=1441, majf=0, minf=32769
  IO depths: 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
  issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119840: Fri Nov 29 21:54:23 2024
  read: IOPS=44, BW=44.4MiB/s (46.6MB/s)(461MiB/10379msec)
  slat (usec): min=525, max=1719.2k, avg=22504.32, stdev=89979.26
  clat (usec): min=930, max=3902.5k, avg=2105812.92, stdev=524774.05
  lat (msec): min=846, max=3908, avg=2128.32, stdev=525.46
  clat percentiles (msec): 1.00th=[ 894], 5.00th=[ 1217], 10.00th=[ 1502], 20.00th=[ 1670], 30.00th=[ 1804], 40.00th=[ 1938], 50.00th=[ 2089], 60.00th=[ 2265], 70.00th=[ 2467], 80.00th=[ 2567], 90.00th=[ 2702], 95.00th=[ 2769], 99.00th=[ 3876], 99.50th=[ 3910], 99.90th=[ 3910], 99.95th=[ 3910], 99.99th=[ 3910]
  bw (KiB/s): min=5976, max=102400, per=1.36%, avg=52425.31, stdev=25360.22, samples=13
  iops: min=5, max=100, avg=51.00, stdev=24.79, samples=13
  lat (usec): 1000=0.22%
  lat (msec): 1000=1.52%, 2000=41.21%, >=2000=57.05%
  cpu: usr=0.00%, sys=1.24%, ctx=1292, majf=0, minf=32769
  IO depths: 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.3%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
  issued rwts: total=461,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119841: Fri Nov 29 21:54:23 2024
  read: IOPS=40, BW=40.3MiB/s (42.3MB/s)(423MiB/10488msec)
  slat (usec): min=60, max=2111.3k, avg=24756.47, stdev=154893.06
  clat (msec): min=13, max=5649, avg=2946.22, stdev=1577.95
  lat (msec): min=882, max=5649, avg=2970.97, stdev=1570.01
  clat percentiles (msec): 1.00th=[ 885], 5.00th=[ 894], 10.00th=[ 1028], 20.00th=[ 1536], 30.00th=[ 1720], 40.00th=[ 2534], 50.00th=[ 2668], 60.00th=[ 2836], 70.00th=[ 3641], 80.00th=[ 5067], 90.00th=[ 5403], 95.00th=[ 5537], 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 99.99th=[ 5671]
  bw (KiB/s): min=2052, max=155648, per=1.43%, avg=54934.82, stdev=54700.36, samples=11
  iops: min=2, max=152, avg=53.64, stdev=53.42, samples=11
  lat (msec): 20=0.24%, 1000=8.98%, 2000=30.02%, >=2000=60.76%
  cpu: usr=0.02%, sys=1.25%, ctx=784, majf=0, minf=32769
  IO depths: 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
  issued rwts: total=423,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119842: Fri Nov 29 21:54:23 2024
  read: IOPS=16, BW=16.0MiB/s (16.8MB/s)(167MiB/10434msec)
  slat (usec): min=60, max=2076.2k, avg=62391.52, stdev=294292.75
  clat (msec): min=13, max=9577, avg=6909.88, stdev=2990.22
  lat (msec): min=1592, max=9598, avg=6972.27, stdev=2950.75
  clat percentiles (msec): 1.00th=[ 1586], 5.00th=[ 1754], 10.00th=[ 1871], 20.00th=[ 2106], 30.00th=[ 7953], 40.00th=[ 8154], 50.00th=[ 8288], 60.00th=[ 8658], 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 99.99th=[ 9597]
  bw (KiB/s): min=2048, max=40960, per=0.69%, avg=26624.00, stdev=21381.75, samples=3
  iops: min=2, max=40, avg=26.00, stdev=20.88, samples=3
  lat (msec): 20=0.60%, 2000=16.77%, >=2000=82.63%
  cpu: usr=0.02%, sys=0.93%, ctx=459, majf=0, minf=32769
  IO depths: 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.2%, >=64=62.3%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4%
  issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119843: Fri Nov 29 21:54:23 2024
  read: IOPS=41, BW=41.4MiB/s (43.4MB/s)(419MiB/10112msec)
  slat (usec): min=420, max=2141.1k, avg=23918.21, stdev=137233.23
  clat (msec): min=87, max=6024, avg=2794.80, stdev=2005.83
  lat (msec): min=115, max=6036, avg=2818.72, stdev=2014.40
  clat percentiles (msec): 1.00th=[ 146], 5.00th=[ 334], 10.00th=[ 625], 20.00th=[ 1452], 30.00th=[ 1620], 40.00th=[ 1720], 50.00th=[ 1854], 60.00th=[ 2005], 70.00th=[ 3809], 80.00th=[ 5738], 90.00th=[ 5940], 95.00th=[ 6007], 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 99.99th=[ 6007]
  bw (KiB/s): min=26570, max=110592, per=1.55%, avg=59582.80, stdev=23235.06, samples=10
  iops: min=25, max=108, avg=58.00, stdev=22.92, samples=10
  lat (msec): 100=0.24%, 250=3.34%, 500=4.30%, 750=3.82%, 1000=3.58%, 2000=45.11%, >=2000=39.62%
  cpu: usr=0.01%, sys=1.19%, ctx=1140, majf=0, minf=32769
  IO depths: 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
  issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119844: Fri Nov 29 21:54:23 2024
  read: IOPS=20, BW=20.9MiB/s (21.9MB/s)(217MiB/10399msec)
  slat (usec): min=131, max=2114.1k, avg=47722.36, stdev=280870.86
  clat (msec): min=41, max=9774, avg=5795.51, stdev=3946.55
  lat (msec): min=987, max=9775, avg=5843.23, stdev=3932.75
  clat percentiles (msec): 1.00th=[ 986], 5.00th=[ 995], 10.00th=[ 1011], 20.00th=[ 1045], 30.00th=[ 1116], 40.00th=[ 2106], 50.00th=[ 8658], 60.00th=[ 8926], 70.00th=[ 9194], 80.00th=[ 9329], 90.00th=[ 9597], 95.00th=[ 9597], 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 99.99th=[ 9731]
  bw (KiB/s): min=2048, max=108544, per=0.95%, avg=36454.40, stdev=47973.06, samples=5
  iops: min=2, max=106, avg=35.60, stdev=46.85, samples=5
  lat (msec): 50=0.46%, 1000=5.99%, 2000=32.72%, >=2000=60.83%
  cpu: usr=0.02%, sys=1.36%, ctx=339, majf=0, minf=32769
  IO depths: 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.4%, 32=14.7%, >=64=71.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1%
  issued rwts: total=217,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119845: Fri Nov 29 21:54:23 2024
  read: IOPS=201, BW=201MiB/s (211MB/s)(2024MiB/10049msec)
  slat (usec): min=35, max=2098.6k, avg=4935.76, stdev=47019.38
  clat (msec): min=47, max=2745, avg=614.30, stdev=545.57
  lat (msec): min=51, max=4668, avg=619.24, stdev=551.63
  clat percentiles (msec): 1.00th=[ 112], 5.00th=[ 380], 10.00th=[ 388], 20.00th=[ 393], 30.00th=[ 397], 40.00th=[ 405], 50.00th=[ 447], 60.00th=[ 527], 70.00th=[ 535], 80.00th=[ 659], 90.00th=[ 693], 95.00th=[ 2635], 99.00th=[ 2702], 99.50th=[ 2702], 99.90th=[ 2735], 99.95th=[ 2735], 99.99th=[ 2735]
  bw (KiB/s): min=36864, max=335872, per=6.31%, avg=242816.00, stdev=85054.79, samples=16
  iops: min=36, max=328, avg=237.12, stdev=83.06, samples=16
  lat (msec): 50=0.05%, 100=0.74%, 250=2.42%, 500=52.96%, 750=37.40%, 1000=0.10%, >=2000=6.32%
  cpu: usr=0.13%, sys=3.00%, ctx=1829, majf=0, minf=32769
  IO depths: 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
  issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119846: Fri Nov 29 21:54:23 2024
  read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(198MiB/10525msec)
  slat (usec): min=58, max=2094.6k, avg=52941.93, stdev=240668.58
  clat (msec): min=41, max=7690, avg=5179.18, stdev=1316.35
  lat (msec): min=2081, max=7717, avg=5232.12, stdev=1269.66
  clat percentiles (msec): 1.00th=[ 2089], 5.00th=[ 3507], 10.00th=[ 3608], 20.00th=[ 3977], 30.00th=[ 4396], 40.00th=[ 4597], 50.00th=[ 5000], 60.00th=[ 5537], 70.00th=[ 5873], 80.00th=[ 6409], 90.00th=[ 7148], 95.00th=[ 7550], 99.00th=[ 7684], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684], 99.99th=[ 7684]
  bw (KiB/s): min=4096, max=73728, per=0.75%, avg=28672.00, stdev=29143.55, samples=5
  iops: min=4, max=72, avg=28.00, stdev=28.46, samples=5
  lat (msec): 50=0.51%, >=2000=99.49%
  cpu: usr=0.00%, sys=1.12%, ctx=654, majf=0, minf=32769
  IO depths: 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.1%, 32=16.2%, >=64=68.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4%
  issued rwts: total=198,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119847: Fri Nov 29 21:54:23 2024
  read: IOPS=8, BW=8622KiB/s (8829kB/s)(88.0MiB/10451msec)
  slat (usec): min=899, max=2085.9k, avg=118282.98, stdev=433855.44
  clat (msec): min=41, max=10449, avg=5977.66, stdev=2945.56
  lat (msec): min=2075, max=10450, avg=6095.94, stdev=2913.28
  clat percentiles (msec): 1.00th=[ 42], 5.00th=[ 3473], 10.00th=[ 3540], 20.00th=[ 3742], 30.00th=[ 3910], 40.00th=[ 4010], 50.00th=[ 4178], 60.00th=[ 4329], 70.00th=[ 8557], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402], 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 99.99th=[10402]
  lat (msec): 50=1.14%, >=2000=98.86%
  cpu: usr=0.01%, sys=0.59%, ctx=233, majf=0, minf=22529
  IO depths: 1=1.1%, 2=2.3%, 4=4.5%, 8=9.1%, 16=18.2%, 32=36.4%, >=64=28.4%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
  issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job2: (groupid=0, jobs=1): err= 0: pid=3119848: Fri Nov 29 21:54:23 2024
  read: IOPS=81, BW=81.6MiB/s (85.6MB/s)(848MiB/10388msec)
  slat (usec): min=42, max=2077.8k, avg=12192.23, stdev=114532.44
  clat (msec): min=41, max=4721, avg=1047.37, stdev=873.24
  lat (msec): min=499, max=4730, avg=1059.57, stdev=881.93
  clat percentiles (msec): 1.00th=[ 502], 5.00th=[ 502], 10.00th=[ 506], 20.00th=[ 523], 30.00th=[ 542], 40.00th=[ 567], 50.00th=[ 768], 60.00th=[ 852], 70.00th=[ 885], 80.00th=[ 927], 90.00th=[ 2400], 95.00th=[ 2534], 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 99.99th=[ 4732]
  bw (KiB/s): min=110592, max=256000, per=4.79%, avg=184320.00, stdev=56618.43, samples=8
  iops: min=108, max=250, avg=180.00, stdev=55.29, samples=8
  lat (msec): 50=0.12%, 500=1.06%, 750=47.41%, 1000=32.90%, >=2000=18.51%
  cpu: usr=0.06%, sys=2.03%, ctx=734, majf=0, minf=32769
  IO depths: 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
  issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
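The per-record "lat (msec)" buckets are percentages of all issued IOs, so they should sum to roughly 100%. A one-line check against pid=3119848 directly above (values copied from its lat line):

    # Latency buckets of pid=3119848 should account for every IO.
    buckets = {"50": 0.12, "500": 1.06, "750": 47.41, "1000": 32.90, ">=2000": 18.51}
    print(sum(buckets.values()))  # 100.00 (rounding aside)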
job3: (groupid=0, jobs=1): err= 0: pid=3119854: Fri Nov 29 21:54:23 2024
  read: IOPS=7, BW=7893KiB/s (8082kB/s)(81.0MiB/10509msec)
  slat (usec): min=914, max=2087.7k, avg=129080.90, stdev=481996.12
  clat (msec): min=52, max=10505, avg=7110.40, stdev=3278.67
  lat (msec): min=2073, max=10508, avg=7239.48, stdev=3202.26
  clat percentiles (msec): 1.00th=[ 53], 5.00th=[ 2089], 10.00th=[ 2123], 20.00th=[ 4245], 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8557], 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10537], 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 99.99th=[10537]
  lat (msec): 100=1.23%, >=2000=98.77%
  cpu: usr=0.00%, sys=0.86%, ctx=85, majf=0, minf=20737
  IO depths: 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
  issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119855: Fri Nov 29 21:54:23 2024
  read: IOPS=44, BW=44.3MiB/s (46.4MB/s)(447MiB/10098msec)
  slat (usec): min=40, max=2055.9k, avg=22432.20, stdev=159093.94
  clat (msec): min=67, max=5861, avg=1504.67, stdev=1277.12
  lat (msec): min=114, max=5875, avg=1527.11, stdev=1298.03
  clat percentiles (msec): 1.00th=[ 124], 5.00th=[ 226], 10.00th=[ 347], 20.00th=[ 625], 30.00th=[ 735], 40.00th=[ 751], 50.00th=[ 768], 60.00th=[ 793], 70.00th=[ 2005], 80.00th=[ 3071], 90.00th=[ 3473], 95.00th=[ 3608], 99.00th=[ 4111], 99.50th=[ 5805], 99.90th=[ 5873], 99.95th=[ 5873], 99.99th=[ 5873]
  bw (KiB/s): min=28672, max=186368, per=2.81%, avg=108153.50, stdev=71136.48, samples=6
  iops: min=28, max=182, avg=105.50, stdev=69.38, samples=6
  lat (msec): 100=0.22%, 250=5.82%, 500=8.95%, 750=24.61%, 1000=25.73%, 2000=4.47%, >=2000=30.20%
  cpu: usr=0.10%, sys=1.27%, ctx=598, majf=0, minf=32769
  IO depths: 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
  issued rwts: total=447,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119856: Fri Nov 29 21:54:23 2024
  read: IOPS=38, BW=38.4MiB/s (40.3MB/s)(400MiB/10416msec)
  slat (usec): min=36, max=2090.8k, avg=25902.89, stdev=180887.15
  clat (msec): min=52, max=8935, avg=3182.64, stdev=3502.33
  lat (msec): min=631, max=8938, avg=3208.55, stdev=3509.05
  clat percentiles (msec): 1.00th=[ 634], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 642], 30.00th=[ 651], 40.00th=[ 667], 50.00th=[ 676], 60.00th=[ 1167], 70.00th=[ 7013], 80.00th=[ 8221], 90.00th=[ 8792], 95.00th=[ 8792], 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 99.99th=[ 8926]
  bw (KiB/s): min=2048, max=206848, per=1.45%, avg=55693.70, stdev=72419.41, samples=10
  iops: min=2, max=202, avg=54.20, stdev=70.82, samples=10
  lat (msec): 100=0.25%, 750=55.00%, 1000=3.50%, 2000=5.75%, >=2000=35.50%
  cpu: usr=0.03%, sys=1.32%, ctx=500, majf=0, minf=32769
  IO depths: 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
  issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119857: Fri Nov 29 21:54:23 2024
  read: IOPS=91, BW=91.7MiB/s (96.2MB/s)(930MiB/10138msec)
  slat (usec): min=34, max=2067.2k, avg=10814.68, stdev=88703.22
  clat (msec): min=74, max=4806, avg=855.52, stdev=626.62
  lat (msec): min=159, max=4811, avg=866.33, stdev=640.60
  clat percentiles (msec): 1.00th=[ 288], 5.00th=[ 380], 10.00th=[ 384], 20.00th=[ 409], 30.00th=[ 527], 40.00th=[ 634], 50.00th=[ 709], 60.00th=[ 894], 70.00th=[ 902], 80.00th=[ 936], 90.00th=[ 1552], 95.00th=[ 1888], 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 99.99th=[ 4799]
  bw (KiB/s): min=47104, max=339968, per=3.88%, avg=149317.82, stdev=92860.28, samples=11
  iops: min=46, max=332, avg=145.82, stdev=90.68, samples=11
  lat (msec): 100=0.11%, 250=0.86%, 500=27.10%, 750=26.02%, 1000=27.74%, 2000=16.02%, >=2000=2.15%
  cpu: usr=0.03%, sys=2.04%, ctx=1009, majf=0, minf=32769
  IO depths: 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
  issued rwts: total=930,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119858: Fri Nov 29 21:54:23 2024
  read: IOPS=14, BW=14.3MiB/s (15.0MB/s)(144MiB/10064msec)
  slat (usec): min=119, max=4028.7k, avg=69448.81, stdev=401500.45
  clat (msec): min=62, max=9935, avg=2436.08, stdev=2816.09
  lat (msec): min=64, max=9949, avg=2505.53, stdev=2879.03
  clat percentiles (msec): 1.00th=[ 65], 5.00th=[ 73], 10.00th=[ 199], 20.00th=[ 326], 30.00th=[ 659], 40.00th=[ 1020], 50.00th=[ 1401], 60.00th=[ 1552], 70.00th=[ 1636], 80.00th=[ 6007], 90.00th=[ 6141], 95.00th=[ 8154], 99.00th=[ 9866], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 99.99th=[10000]
  lat (msec): 100=8.33%, 250=2.78%, 500=14.58%, 750=6.25%, 1000=7.64%, 2000=36.11%, >=2000=24.31%
  cpu: usr=0.00%, sys=0.63%, ctx=315, majf=0, minf=32769
  IO depths: 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.1%, 32=22.2%, >=64=56.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=94.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.6%
  issued rwts: total=144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119859: Fri Nov 29 21:54:23 2024
  read: IOPS=27, BW=28.0MiB/s (29.3MB/s)(283MiB/10111msec)
  slat (usec): min=36, max=2095.1k, avg=35387.00, stdev=200082.81
  clat (msec): min=93, max=5363, avg=3020.05, stdev=1674.27
  lat (msec): min=119, max=5397, avg=3055.44, stdev=1672.66
  clat percentiles (msec): 1.00th=[ 122], 5.00th=[ 317], 10.00th=[ 684], 20.00th=[ 1485], 30.00th=[ 1620], 40.00th=[ 1737], 50.00th=[ 3910], 60.00th=[ 4178], 70.00th=[ 4329], 80.00th=[ 4463], 90.00th=[ 5134], 95.00th=[ 5201], 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 99.99th=[ 5336]
  bw (KiB/s): min=24625, max=77824, per=1.18%, avg=45355.57, stdev=20142.23, samples=7
  iops: min=24, max=76, avg=44.29, stdev=19.68, samples=7
  lat (msec): 100=0.35%, 250=3.89%, 500=2.83%, 750=4.24%, 1000=3.18%, 2000=29.33%, >=2000=56.18%
  cpu: usr=0.01%, sys=1.50%, ctx=598, majf=0, minf=32769
  IO depths: 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
  issued rwts: total=283,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119860: Fri Nov 29 21:54:23 2024
  read: IOPS=24, BW=24.4MiB/s (25.6MB/s)(247MiB/10112msec)
  slat (usec): min=82, max=4034.9k, avg=40553.28, stdev=289111.88
  clat (msec): min=94, max=9823, avg=4370.53, stdev=2883.35
  lat (msec): min=146, max=9899, avg=4411.08, stdev=2888.46
  clat percentiles (msec): 1.00th=[ 182], 5.00th=[ 592], 10.00th=[ 634], 20.00th=[ 1083], 30.00th=[ 1972], 40.00th=[ 2534], 50.00th=[ 5873], 60.00th=[ 6477], 70.00th=[ 7080], 80.00th=[ 7416], 90.00th=[ 7550], 95.00th=[ 7617], 99.00th=[ 7617], 99.50th=[ 8154], 99.90th=[ 9866], 99.95th=[ 9866], 99.99th=[ 9866]
  bw (KiB/s): min=14336, max=112640, per=1.06%, avg=40605.00, stdev=36392.46, samples=6
  iops: min=14, max=110, avg=39.50, stdev=35.54, samples=6
  lat (msec): 100=0.40%, 250=1.21%, 500=2.83%, 750=11.34%, 1000=3.24%, 2000=11.74%, >=2000=69.23%
  cpu: usr=0.01%, sys=1.32%, ctx=426, majf=0, minf=32769
  IO depths: 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.5%, 32=13.0%, >=64=74.5%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
  issued rwts: total=247,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119861: Fri Nov 29 21:54:23 2024
  read: IOPS=10, BW=10.1MiB/s (10.5MB/s)(105MiB/10446msec)
  slat (usec): min=508, max=2095.2k, avg=96024.10, stdev=380608.99
  clat (msec): min=362, max=10442, avg=4227.10, stdev=4079.61
  lat (msec): min=447, max=10445, avg=4323.12, stdev=4106.35
  clat percentiles (msec): 1.00th=[ 447], 5.00th=[ 477], 10.00th=[ 575], 20.00th=[ 844], 30.00th=[ 1116], 40.00th=[ 1469], 50.00th=[ 1720], 60.00th=[ 2106], 70.00th=[ 6342], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402], 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 99.99th=[10402]
  lat (msec): 500=6.67%, 750=8.57%, 1000=12.38%, 2000=29.52%, >=2000=42.86%
  cpu: usr=0.00%, sys=0.71%, ctx=297, majf=0, minf=26881
  IO depths: 1=1.0%, 2=1.9%, 4=3.8%, 8=7.6%, 16=15.2%, 32=30.5%, >=64=40.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
  issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119862: Fri Nov 29 21:54:23 2024
  read: IOPS=6, BW=7111KiB/s (7282kB/s)(73.0MiB/10512msec)
  slat (usec): min=946, max=2085.0k, avg=143303.60, stdev=507740.08
  clat (msec): min=50, max=10510, avg=7651.46, stdev=3395.81
  lat (msec): min=2070, max=10511, avg=7794.76, stdev=3289.67
  clat percentiles (msec): 1.00th=[ 51], 5.00th=[ 2072], 10.00th=[ 2106], 20.00th=[ 4212], 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10268], 60.00th=[10402], 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 99.99th=[10537]
  lat (msec): 100=1.37%, >=2000=98.63%
  cpu: usr=0.00%, sys=0.82%, ctx=100, majf=0, minf=18689
  IO depths: 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
  issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119863: Fri Nov 29 21:54:23 2024
  read: IOPS=23, BW=23.2MiB/s (24.3MB/s)(235MiB/10126msec)
  slat (usec): min=40, max=2090.4k, avg=42674.89, stdev=260778.17
  clat (msec): min=96, max=9131, avg=1232.23, stdev=1664.79
  lat (msec): min=191, max=9152, avg=1274.90, stdev=1742.37
  clat percentiles (msec): 1.00th=[ 192], 5.00th=[ 209], 10.00th=[ 330], 20.00th=[ 472], 30.00th=[ 709], 40.00th=[ 852], 50.00th=[ 995], 60.00th=[ 1011], 70.00th=[ 1011], 80.00th=[ 1083], 90.00th=[ 1099], 95.00th=[ 7282], 99.00th=[ 9060], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 99.99th=[ 9194]
  bw (KiB/s): min=92160, max=126976, per=2.85%, avg=109568.00, stdev=24618.63, samples=2
  iops: min=90, max=124, avg=107.00, stdev=24.04, samples=2
  lat (msec): 100=0.43%, 250=6.81%, 500=13.19%, 750=13.19%, 1000=18.72%, 2000=40.85%, >=2000=6.81%
  cpu: usr=0.00%, sys=1.25%, ctx=194, majf=0, minf=32769
  IO depths: 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9%
  issued rwts: total=235,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119864: Fri Nov 29 21:54:23 2024
  read: IOPS=6, BW=6374KiB/s (6527kB/s)(65.0MiB/10442msec)
  slat (usec): min=822, max=4185.5k, avg=159829.66, stdev=629431.98
  clat (msec): min=52, max=10429, avg=4997.51, stdev=4111.25
  lat (msec): min=1332, max=10441, avg=5157.34, stdev=4117.93
  clat percentiles (msec): 1.00th=[ 53], 5.00th=[ 1334], 10.00th=[ 1351], 20.00th=[ 1435], 30.00th=[ 1620], 40.00th=[ 1720], 50.00th=[ 2022], 60.00th=[ 6342], 70.00th=[ 8557], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 99.99th=[10402]
  lat (msec): 100=1.54%, 2000=43.08%, >=2000=55.38%
  cpu: usr=0.01%, sys=0.42%, ctx=159, majf=0, minf=16641
  IO depths: 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
  issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job3: (groupid=0, jobs=1): err= 0: pid=3119866: Fri Nov 29 21:54:23 2024
  read: IOPS=12, BW=12.9MiB/s (13.5MB/s)(130MiB/10095msec)
  slat (usec): min=395, max=2184.9k, avg=76929.14, stdev=340889.02
  clat (msec): min=93, max=10085, avg=4531.17, stdev=3819.63
  lat (msec): min=97, max=10088, avg=4608.09, stdev=3830.26
  clat percentiles (msec): 1.00th=[ 99], 5.00th=[ 186], 10.00th=[ 363], 20.00th=[ 718], 30.00th=[ 1116], 40.00th=[ 1720], 50.00th=[ 3708], 60.00th=[ 3809], 70.00th=[ 8154], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10134], 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 99.99th=[10134]
  bw (KiB/s): min=6131, max=6131, per=0.16%, avg=6131.00, stdev=0.00, samples=1
  iops: min=5, max=5, avg=5.00, stdev=0.00, samples=1
  lat (msec): 100=1.54%, 250=4.62%, 500=8.46%, 750=6.15%, 1000=5.38%, 2000=14.62%, >=2000=59.23%
  cpu: usr=0.00%, sys=0.82%, ctx=276, majf=0, minf=32769
  IO depths: 1=0.8%, 2=1.5%, 4=3.1%, 8=6.2%, 16=12.3%, 32=24.6%, >=64=51.5%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=75.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=25.0%
  issued rwts: total=130,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
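Every record here reports "latency: ... depth=128" and a ~10s runtime, i.e. 128-deep read jobs. A hypothetical reconstruction of a similar workload shape with plain fio follows; the flags are generic fio options and the rw pattern, ioengine, and target device are assumptions, not the actual command or device this nvmf test used:

    import subprocess

    # Illustrative only: a 128-deep, ~10s read job resembling these records.
    # /dev/nvme0n1, randread, and libaio are assumptions.
    cmd = ["fio", "--name=job0", "--rw=randread", "--iodepth=128",
           "--time_based", "--runtime=10", "--ioengine=libaio",
           "--filename=/dev/nvme0n1"]
    print(" ".join(cmd))            # inspect before running
    # subprocess.run(cmd, check=True)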
job3: (groupid=0, jobs=1): err= 0: pid=3119867: Fri Nov 29 21:54:23 2024
  read: IOPS=28, BW=29.0MiB/s (30.4MB/s)(303MiB/10451msec)
  slat (usec): min=42, max=2108.0k, avg=34311.08, stdev=228764.09
  clat (msec): min=52, max=5076, avg=2616.18, stdev=1849.69
  lat (msec): min=891, max=5082, avg=2650.49, stdev=1848.42
  clat percentiles (msec): 1.00th=[ 885], 5.00th=[ 894], 10.00th=[ 894], 20.00th=[ 902], 30.00th=[ 902], 40.00th=[ 911], 50.00th=[ 986], 60.00th=[ 4279], 70.00th=[ 4530], 80.00th=[ 4732], 90.00th=[ 4933], 95.00th=[ 5000], 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 99.99th=[ 5067]
  bw (KiB/s): min=12288, max=143360, per=2.33%, avg=89600.00, stdev=64120.72, samples=4
  iops: min=12, max=140, avg=87.50, stdev=62.62, samples=4
  lat (msec): 100=0.33%, 1000=50.17%, 2000=1.65%, >=2000=47.85%
  cpu: usr=0.00%, sys=0.94%, ctx=335, majf=0, minf=32769
  IO depths: 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
  issued rwts: total=303,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job4: (groupid=0, jobs=1): err= 0: pid=3119882: Fri Nov 29 21:54:23 2024
  read: IOPS=41, BW=41.6MiB/s (43.6MB/s)(420MiB/10099msec)
  slat (usec): min=38, max=2073.7k, avg=23805.41, stdev=156738.05
  clat (msec): min=98, max=4822, avg=1870.93, stdev=1387.77
  lat (msec): min=103, max=4837, avg=1894.74, stdev=1394.76
  clat percentiles (msec): 1.00th=[ 188], 5.00th=[ 659], 10.00th=[ 852], 20.00th=[ 860], 30.00th=[ 885], 40.00th=[ 894], 50.00th=[ 902], 60.00th=[ 1183], 70.00th=[ 3037], 80.00th=[ 3742], 90.00th=[ 3977], 95.00th=[ 4010], 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 99.99th=[ 4799]
  bw (KiB/s): min=16384, max=153600, per=1.95%, avg=75008.00, stdev=63037.52, samples=8
  iops: min=16, max=150, avg=73.25, stdev=61.56, samples=8
  lat (msec): 100=0.24%, 250=2.14%, 500=1.90%, 750=0.71%, 1000=53.10%, 2000=7.14%, >=2000=34.76%
  cpu: usr=0.02%, sys=1.33%, ctx=567, majf=0, minf=32769
  IO depths: 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
  issued rwts: total=420,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job4: (groupid=0, jobs=1): err= 0: pid=3119883: Fri Nov 29 21:54:23 2024
  read: IOPS=37, BW=37.5MiB/s (39.3MB/s)(379MiB/10100msec)
  slat (usec): min=70, max=2088.5k, avg=26465.74, stdev=176818.77
  clat (msec): min=67, max=6866, avg=1640.47, stdev=1337.90
  lat (msec): min=177, max=6867, avg=1666.94, stdev=1361.13
  clat percentiles (msec): 1.00th=[ 226], 5.00th=[ 860], 10.00th=[ 885], 20.00th=[ 911], 30.00th=[ 927], 40.00th=[ 936], 50.00th=[ 961], 60.00th=[ 1485], 70.00th=[ 1888], 80.00th=[ 2123], 90.00th=[ 2366], 95.00th=[ 5067], 99.00th=[ 6812], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 99.99th=[ 6879]
  bw (KiB/s): min=18035, max=139264, per=1.91%, avg=73396.29, stdev=52003.37, samples=7
  iops: min=17, max=136, avg=71.57, stdev=50.90, samples=7
  lat (msec): 100=0.26%, 250=0.79%, 500=1.06%, 750=1.58%, 1000=46.70%, 2000=24.80%, >=2000=24.80%
  cpu: usr=0.02%, sys=1.11%, ctx=884, majf=0, minf=32769
  IO depths: 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
  issued rwts: total=379,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job4: (groupid=0, jobs=1): err= 0: pid=3119884: Fri Nov 29 21:54:23 2024
  read: IOPS=20, BW=20.7MiB/s (21.7MB/s)(210MiB/10162msec)
  slat (usec): min=138, max=2153.4k, avg=47829.73, stdev=241601.53
  clat (msec): min=116, max=8949, avg=4586.13, stdev=2692.55
  lat (msec): min=238, max=8951, avg=4633.95, stdev=2693.60
  clat percentiles (msec): 1.00th=[ 253], 5.00th=[ 869], 10.00th=[ 978], 20.00th=[ 1250], 30.00th=[ 1687], 40.00th=[ 4866], 50.00th=[ 5000], 60.00th=[ 5269], 70.00th=[ 5403], 80.00th=[ 5604], 90.00th=[ 8792], 95.00th=[ 8792], 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 99.99th=[ 8926]
  bw (KiB/s): min=2048, max=57344, per=0.62%, avg=23990.86, stdev=21632.49, samples=7
  iops: min=2, max=56, avg=23.43, stdev=21.13, samples=7
  lat (msec): 250=0.95%, 500=1.43%, 750=1.90%, 1000=6.67%, 2000=19.52%, >=2000=69.52%
  cpu: usr=0.03%, sys=1.12%, ctx=494, majf=0, minf=32331
  IO depths: 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.6%, 32=15.2%, >=64=70.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2%
  issued rwts: total=210,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job4: (groupid=0, jobs=1): err= 0: pid=3119886: Fri Nov 29 21:54:23 2024
  read: IOPS=131, BW=132MiB/s (138MB/s)(1338MiB/10151msec)
  slat (usec): min=43, max=1325.4k, avg=7505.91, stdev=39935.34
  clat (msec): min=102, max=3533, avg=935.30, stdev=775.22
  lat (msec): min=174, max=3535, avg=942.81, stdev=778.38
  clat percentiles (msec): 1.00th=[ 368], 5.00th=[ 380], 10.00th=[ 384], 20.00th=[ 393], 30.00th=[ 414], 40.00th=[ 659], 50.00th=[ 776], 60.00th=[ 835], 70.00th=[ 986], 80.00th=[ 1036], 90.00th=[ 1552], 95.00th=[ 3171], 99.00th=[ 3440], 99.50th=[ 3507], 99.90th=[ 3540], 99.95th=[ 3540], 99.99th=[ 3540]
  bw (KiB/s): min=8192, max=321536, per=3.58%, avg=137632.44, stdev=94749.93, samples=18
  iops: min=8, max=314, avg=134.28, stdev=92.57, samples=18
  lat (msec): 250=0.37%, 500=33.18%, 750=11.73%, 1000=25.26%, 2000=19.81%, >=2000=9.64%
  cpu: usr=0.05%, sys=2.09%, ctx=1404, majf=0, minf=32770
  IO depths: 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
  issued rwts: total=1338,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job4: (groupid=0, jobs=1): err= 0: pid=3119887: Fri Nov 29 21:54:23 2024
  read: IOPS=91, BW=91.8MiB/s (96.3MB/s)(920MiB/10021msec)
  slat (usec): min=44, max=2089.1k, avg=10865.58, stdev=113833.13
  clat (msec): min=18, max=5004, avg=522.25, stdev=383.34
  lat (msec): min=20, max=6846, avg=533.11, stdev=436.81
  clat percentiles (msec): 1.00th=[ 44], 5.00th=[ 114], 10.00th=[ 245], 20.00th=[ 393], 30.00th=[ 397], 40.00th=[ 405], 50.00th=[ 409], 60.00th=[ 542], 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 726], 95.00th=[ 760], 99.00th=[ 2869], 99.50th=[ 2903], 99.90th=[ 5000], 99.95th=[ 5000], 99.99th=[ 5000]
  bw (KiB/s): min=65536, max=323584, per=5.49%, avg=211285.33, stdev=97991.32, samples=6
  iops: min=64, max=316, avg=206.33, stdev=95.69, samples=6
  lat (msec): 20=0.11%, 50=1.52%, 100=2.28%, 250=6.30%, 500=48.91%, 750=35.33%, 1000=4.46%, >=2000=1.09%
  cpu: usr=0.03%, sys=1.58%, ctx=1151, majf=0, minf=32769
  IO depths: 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
  issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job4: (groupid=0, jobs=1): err= 0: pid=3119888: Fri Nov 29 21:54:23 2024
  read: IOPS=73, BW=73.9MiB/s (77.5MB/s)(747MiB/10106msec)
  slat (usec): min=52, max=2064.5k, avg=13396.45, stdev=89027.59
  clat (msec): min=93, max=5980, avg=1642.11, stdev=1297.55
  lat (msec): min=112, max=6045, avg=1655.51, stdev=1302.54
  clat percentiles (msec): 1.00th=[ 178], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 642], 30.00th=[ 651], 40.00th=[ 693], 50.00th=[ 718], 60.00th=[ 1469], 70.00th=[ 2467], 80.00th=[ 3138], 90.00th=[ 3809], 95.00th=[ 4010], 99.00th=[ 4111], 99.50th=[ 4144], 99.90th=[ 6007], 99.95th=[ 6007], 99.99th=[ 6007]
  bw (KiB/s): min=4096, max=206848, per=2.20%, avg=84623.67, stdev=63968.83, samples=15
  iops: min=4, max=202, avg=82.60, stdev=62.39, samples=15
  lat (msec): 100=0.13%, 250=1.61%, 500=1.34%, 750=51.00%, 1000=3.08%, 2000=9.50%, >=2000=33.33%
  cpu: usr=0.04%, sys=1.92%, ctx=951, majf=0, minf=32769
  IO depths: 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
  issued rwts: total=747,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job4: (groupid=0, jobs=1): err= 0: pid=3119889: Fri Nov 29 21:54:23 2024
  read: IOPS=21, BW=21.5MiB/s (22.5MB/s)(218MiB/10156msec)
  slat (usec): min=99, max=2115.5k, avg=46044.52, stdev=241479.59
  clat (msec): min=116, max=8802, avg=5416.64, stdev=3478.05
  lat (msec): min=172, max=8812, avg=5462.68, stdev=3467.85
  clat percentiles (msec): 1.00th=[ 194], 5.00th=[ 550], 10.00th=[ 844], 20.00th=[ 1435], 30.00th=[ 1687], 40.00th=[ 3708], 50.00th=[ 8154], 60.00th=[ 8288], 70.00th=[ 8423], 80.00th=[ 8490], 90.00th=[ 8658], 95.00th=[ 8792], 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 99.99th=[ 8792]
  bw (KiB/s): min=2048, max=65405, per=0.68%, avg=26309.71, stdev=21747.61, samples=7
  iops: min=2, max=63, avg=25.43, stdev=21.10, samples=7
  lat (msec): 250=1.83%, 500=2.75%, 750=3.67%, 1000=4.59%, 2000=26.15%, >=2000=61.01%
  cpu: usr=0.03%, sys=1.48%, ctx=668, majf=0, minf=32769
  IO depths: 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.3%, 32=14.7%, >=64=71.1%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1%
  issued rwts: total=218,0,0,0 short=0,0,0,0 dropped=0,0,0,0
  latency: target=0, window=0, percentile=100.00%, depth=128
job4: (groupid=0, jobs=1): err= 0: pid=3119890: Fri Nov 29 21:54:23 2024
  read: IOPS=28, BW=28.3MiB/s (29.7MB/s)(287MiB/10125msec)
  slat (usec): min=43, max=2091.7k, avg=34841.82, stdev=232185.87
  clat (msec): min=123, max=8943, avg=2916.74, stdev=3374.47
  lat (msec): min=129, max=8951, avg=2951.58, stdev=3389.92
  clat percentiles (msec): 1.00th=[ 131], 5.00th=[ 249], 10.00th=[ 372], 20.00th=[ 625], 30.00th=[ 776], 40.00th=[ 1011], 50.00th=[ 1011], 60.00th=[ 1020], 70.00th=[ 3071], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 8926], 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 99.99th=[ 8926]
  bw (KiB/s): min=86016, max=126722, per=2.84%, avg=109142.00, stdev=20912.03, samples=3
  iops: min=84, max=123, avg=106.33, stdev=20.11, samples=3
  lat (msec): 250=5.23%, 500=10.10%, 750=10.45%, 1000=10.45%, 2000=33.10%, >=2000=30.66%
  cpu: usr=0.02%, sys=1.44%, ctx=286, majf=0, minf=32769
  IO depths: 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.1%, >=64=78.0%
  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete: 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
  issued rwts:
total=287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.720 job4: (groupid=0, jobs=1): err= 0: pid=3119891: Fri Nov 29 21:54:23 2024 00:24:52.720 read: IOPS=8, BW=8513KiB/s (8717kB/s)(84.0MiB/10104msec) 00:24:52.720 slat (usec): min=1498, max=2153.3k, avg=119112.26, stdev=416813.24 00:24:52.720 clat (msec): min=97, max=10086, avg=2633.98, stdev=3111.13 00:24:52.720 lat (msec): min=109, max=10103, avg=2753.09, stdev=3203.04 00:24:52.720 clat percentiles (msec): 00:24:52.720 | 1.00th=[ 99], 5.00th=[ 123], 10.00th=[ 309], 20.00th=[ 409], 00:24:52.720 | 30.00th=[ 684], 40.00th=[ 1036], 50.00th=[ 1250], 60.00th=[ 1368], 00:24:52.720 | 70.00th=[ 1703], 80.00th=[ 5940], 90.00th=[ 8221], 95.00th=[10000], 00:24:52.720 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:24:52.720 | 99.99th=[10134] 00:24:52.720 lat (msec) : 100=1.19%, 250=7.14%, 500=13.10%, 750=10.71%, 1000=5.95% 00:24:52.720 lat (msec) : 2000=34.52%, >=2000=27.38% 00:24:52.720 cpu : usr=0.00%, sys=0.46%, ctx=345, majf=0, minf=21505 00:24:52.720 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.5%, 16=19.0%, 32=38.1%, >=64=25.0% 00:24:52.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.720 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:24:52.720 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.720 job4: (groupid=0, jobs=1): err= 0: pid=3119892: Fri Nov 29 21:54:23 2024 00:24:52.720 read: IOPS=9, BW=9566KiB/s (9796kB/s)(94.0MiB/10062msec) 00:24:52.720 slat (usec): min=647, max=2069.4k, avg=106641.91, stdev=383180.73 00:24:52.720 clat (msec): min=36, max=9950, avg=2484.72, stdev=2934.26 00:24:52.720 lat (msec): min=68, max=10061, avg=2591.36, stdev=3025.10 00:24:52.720 clat percentiles (msec): 00:24:52.720 | 1.00th=[ 37], 5.00th=[ 95], 10.00th=[ 102], 20.00th=[ 224], 00:24:52.720 | 30.00th=[ 464], 40.00th=[ 810], 50.00th=[ 1150], 60.00th=[ 1552], 00:24:52.720 | 70.00th=[ 1821], 80.00th=[ 5940], 90.00th=[ 8087], 95.00th=[ 8221], 00:24:52.720 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:24:52.720 | 99.99th=[10000] 00:24:52.720 lat (msec) : 50=1.06%, 100=7.45%, 250=11.70%, 500=11.70%, 750=7.45% 00:24:52.720 lat (msec) : 1000=6.38%, 2000=25.53%, >=2000=28.72% 00:24:52.720 cpu : usr=0.00%, sys=0.54%, ctx=332, majf=0, minf=24065 00:24:52.720 IO depths : 1=1.1%, 2=2.1%, 4=4.3%, 8=8.5%, 16=17.0%, 32=34.0%, >=64=33.0% 00:24:52.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.720 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:24:52.720 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.720 job4: (groupid=0, jobs=1): err= 0: pid=3119893: Fri Nov 29 21:54:23 2024 00:24:52.720 read: IOPS=14, BW=14.2MiB/s (14.8MB/s)(143MiB/10099msec) 00:24:52.720 slat (usec): min=416, max=2073.0k, avg=70189.19, stdev=295568.96 00:24:52.720 clat (msec): min=61, max=9832, avg=4751.27, stdev=3514.11 00:24:52.720 lat (msec): min=134, max=9841, avg=4821.46, stdev=3519.36 00:24:52.720 clat percentiles (msec): 00:24:52.720 | 1.00th=[ 134], 5.00th=[ 243], 10.00th=[ 451], 20.00th=[ 844], 00:24:52.720 | 30.00th=[ 1070], 40.00th=[ 1653], 50.00th=[ 6141], 60.00th=[ 7617], 00:24:52.720 | 70.00th=[ 7752], 80.00th=[ 7953], 90.00th=[ 8020], 95.00th=[ 9597], 
00:24:52.720 | 99.00th=[ 9731], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:24:52.720 | 99.99th=[ 9866] 00:24:52.720 bw ( KiB/s): min=30355, max=30355, per=0.79%, avg=30355.00, stdev= 0.00, samples=1 00:24:52.720 iops : min= 29, max= 29, avg=29.00, stdev= 0.00, samples=1 00:24:52.720 lat (msec) : 100=0.70%, 250=4.90%, 500=4.90%, 750=6.99%, 1000=10.49% 00:24:52.720 lat (msec) : 2000=15.38%, >=2000=56.64% 00:24:52.720 cpu : usr=0.00%, sys=0.78%, ctx=474, majf=0, minf=32769 00:24:52.720 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.2%, 32=22.4%, >=64=55.9% 00:24:52.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.720 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.9% 00:24:52.720 issued rwts: total=143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.720 job4: (groupid=0, jobs=1): err= 0: pid=3119894: Fri Nov 29 21:54:23 2024 00:24:52.720 read: IOPS=22, BW=22.4MiB/s (23.5MB/s)(226MiB/10086msec) 00:24:52.720 slat (usec): min=103, max=2105.0k, avg=44243.01, stdev=235991.65 00:24:52.720 clat (msec): min=85, max=8581, avg=5059.92, stdev=3383.00 00:24:52.720 lat (msec): min=97, max=8587, avg=5104.17, stdev=3376.99 00:24:52.720 clat percentiles (msec): 00:24:52.720 | 1.00th=[ 101], 5.00th=[ 313], 10.00th=[ 584], 20.00th=[ 1318], 00:24:52.720 | 30.00th=[ 1502], 40.00th=[ 3641], 50.00th=[ 7684], 60.00th=[ 7819], 00:24:52.720 | 70.00th=[ 7953], 80.00th=[ 8288], 90.00th=[ 8423], 95.00th=[ 8490], 00:24:52.720 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:24:52.720 | 99.99th=[ 8557] 00:24:52.720 bw ( KiB/s): min= 4096, max=77824, per=0.65%, avg=25156.00, stdev=24965.35, samples=8 00:24:52.720 iops : min= 4, max= 76, avg=24.38, stdev=24.31, samples=8 00:24:52.720 lat (msec) : 100=0.88%, 250=3.10%, 500=3.98%, 750=3.98%, 1000=3.10% 00:24:52.720 lat (msec) : 2000=24.34%, >=2000=60.62% 00:24:52.720 cpu : usr=0.00%, sys=1.46%, ctx=650, majf=0, minf=32769 00:24:52.720 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.1%, 32=14.2%, >=64=72.1% 00:24:52.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.720 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:24:52.720 issued rwts: total=226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.720 job4: (groupid=0, jobs=1): err= 0: pid=3119895: Fri Nov 29 21:54:23 2024 00:24:52.720 read: IOPS=52, BW=52.1MiB/s (54.6MB/s)(528MiB/10135msec) 00:24:52.720 slat (usec): min=45, max=2049.8k, avg=19065.36, stdev=128351.75 00:24:52.720 clat (msec): min=65, max=6224, avg=2293.85, stdev=2035.98 00:24:52.720 lat (msec): min=167, max=6228, avg=2312.92, stdev=2040.14 00:24:52.720 clat percentiles (msec): 00:24:52.720 | 1.00th=[ 215], 5.00th=[ 397], 10.00th=[ 676], 20.00th=[ 793], 00:24:52.720 | 30.00th=[ 885], 40.00th=[ 1036], 50.00th=[ 1620], 60.00th=[ 1838], 00:24:52.720 | 70.00th=[ 1905], 80.00th=[ 5537], 90.00th=[ 6007], 95.00th=[ 6007], 00:24:52.720 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:24:52.720 | 99.99th=[ 6208] 00:24:52.720 bw ( KiB/s): min= 6144, max=161469, per=1.77%, avg=68219.25, stdev=56438.82, samples=12 00:24:52.720 iops : min= 6, max= 157, avg=66.50, stdev=54.95, samples=12 00:24:52.720 lat (msec) : 100=0.19%, 250=1.14%, 500=6.63%, 750=3.22%, 1000=26.89% 00:24:52.720 lat (msec) : 2000=37.31%, >=2000=24.62% 00:24:52.720 cpu : usr=0.02%, 
sys=1.34%, ctx=865, majf=0, minf=32769 00:24:52.720 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.1% 00:24:52.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.720 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:24:52.720 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.720 job5: (groupid=0, jobs=1): err= 0: pid=3119904: Fri Nov 29 21:54:23 2024 00:24:52.720 read: IOPS=205, BW=205MiB/s (215MB/s)(2144MiB/10442msec) 00:24:52.720 slat (usec): min=40, max=131520, avg=4660.90, stdev=8880.83 00:24:52.721 clat (msec): min=261, max=2246, avg=562.08, stdev=468.92 00:24:52.721 lat (msec): min=262, max=2260, avg=566.75, stdev=472.11 00:24:52.721 clat percentiles (msec): 00:24:52.721 | 1.00th=[ 264], 5.00th=[ 266], 10.00th=[ 266], 20.00th=[ 268], 00:24:52.721 | 30.00th=[ 271], 40.00th=[ 372], 50.00th=[ 397], 60.00th=[ 418], 00:24:52.721 | 70.00th=[ 464], 80.00th=[ 735], 90.00th=[ 1284], 95.00th=[ 1787], 00:24:52.721 | 99.00th=[ 2198], 99.50th=[ 2232], 99.90th=[ 2232], 99.95th=[ 2232], 00:24:52.721 | 99.99th=[ 2232] 00:24:52.721 bw ( KiB/s): min=47104, max=493568, per=6.71%, avg=258093.37, stdev=156038.09, samples=16 00:24:52.721 iops : min= 46, max= 482, avg=251.94, stdev=152.45, samples=16 00:24:52.721 lat (msec) : 500=71.97%, 750=9.47%, 1000=5.46%, 2000=9.79%, >=2000=3.31% 00:24:52.721 cpu : usr=0.06%, sys=2.34%, ctx=2746, majf=0, minf=32769 00:24:52.721 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:24:52.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.721 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.721 job5: (groupid=0, jobs=1): err= 0: pid=3119905: Fri Nov 29 21:54:23 2024 00:24:52.721 read: IOPS=42, BW=42.5MiB/s (44.6MB/s)(430MiB/10116msec) 00:24:52.721 slat (usec): min=59, max=2056.1k, avg=23338.13, stdev=137057.81 00:24:52.721 clat (msec): min=76, max=5966, avg=2784.72, stdev=1908.63 00:24:52.721 lat (msec): min=130, max=5980, avg=2808.05, stdev=1914.04 00:24:52.721 clat percentiles (msec): 00:24:52.721 | 1.00th=[ 201], 5.00th=[ 584], 10.00th=[ 1167], 20.00th=[ 1318], 00:24:52.721 | 30.00th=[ 1536], 40.00th=[ 1653], 50.00th=[ 1787], 60.00th=[ 2140], 00:24:52.721 | 70.00th=[ 3876], 80.00th=[ 5604], 90.00th=[ 5805], 95.00th=[ 5873], 00:24:52.721 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:24:52.721 | 99.99th=[ 5940] 00:24:52.721 bw ( KiB/s): min=22528, max=75624, per=1.34%, avg=51500.25, stdev=14909.75, samples=12 00:24:52.721 iops : min= 22, max= 73, avg=50.00, stdev=14.36, samples=12 00:24:52.721 lat (msec) : 100=0.23%, 250=1.40%, 500=2.33%, 750=2.56%, 1000=2.33% 00:24:52.721 lat (msec) : 2000=43.49%, >=2000=47.67% 00:24:52.721 cpu : usr=0.02%, sys=1.54%, ctx=1086, majf=0, minf=32769 00:24:52.721 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.3% 00:24:52.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.721 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:24:52.721 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.721 job5: (groupid=0, jobs=1): err= 0: pid=3119906: Fri 
Nov 29 21:54:23 2024 00:24:52.721 read: IOPS=56, BW=56.8MiB/s (59.6MB/s)(579MiB/10189msec) 00:24:52.721 slat (usec): min=107, max=2050.4k, avg=17393.03, stdev=86898.21 00:24:52.721 clat (msec): min=114, max=4416, avg=2110.68, stdev=1049.51 00:24:52.721 lat (msec): min=189, max=4420, avg=2128.08, stdev=1049.34 00:24:52.721 clat percentiles (msec): 00:24:52.721 | 1.00th=[ 213], 5.00th=[ 902], 10.00th=[ 1234], 20.00th=[ 1301], 00:24:52.721 | 30.00th=[ 1401], 40.00th=[ 1620], 50.00th=[ 1770], 60.00th=[ 2056], 00:24:52.721 | 70.00th=[ 2198], 80.00th=[ 3373], 90.00th=[ 4010], 95.00th=[ 4212], 00:24:52.721 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396], 00:24:52.721 | 99.99th=[ 4396] 00:24:52.721 bw ( KiB/s): min=10240, max=130810, per=1.60%, avg=61566.40, stdev=28314.19, samples=15 00:24:52.721 iops : min= 10, max= 127, avg=60.07, stdev=27.52, samples=15 00:24:52.721 lat (msec) : 250=1.04%, 500=1.55%, 750=1.73%, 1000=1.04%, 2000=51.81% 00:24:52.721 lat (msec) : >=2000=42.83% 00:24:52.721 cpu : usr=0.07%, sys=1.41%, ctx=1716, majf=0, minf=32769 00:24:52.721 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:24:52.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.721 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:24:52.721 issued rwts: total=579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.721 job5: (groupid=0, jobs=1): err= 0: pid=3119907: Fri Nov 29 21:54:23 2024 00:24:52.721 read: IOPS=51, BW=51.8MiB/s (54.3MB/s)(523MiB/10101msec) 00:24:52.721 slat (usec): min=42, max=3823.5k, avg=19125.25, stdev=182277.24 00:24:52.721 clat (msec): min=95, max=6541, avg=2057.32, stdev=2331.89 00:24:52.721 lat (msec): min=108, max=6549, avg=2076.44, stdev=2338.86 00:24:52.721 clat percentiles (msec): 00:24:52.721 | 1.00th=[ 268], 5.00th=[ 384], 10.00th=[ 384], 20.00th=[ 384], 00:24:52.721 | 30.00th=[ 414], 40.00th=[ 518], 50.00th=[ 802], 60.00th=[ 1133], 00:24:52.721 | 70.00th=[ 1787], 80.00th=[ 5805], 90.00th=[ 6208], 95.00th=[ 6342], 00:24:52.721 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:24:52.721 | 99.99th=[ 6544] 00:24:52.721 bw ( KiB/s): min=20480, max=243712, per=2.34%, avg=90066.56, stdev=79844.65, samples=9 00:24:52.721 iops : min= 20, max= 238, avg=87.89, stdev=77.87, samples=9 00:24:52.721 lat (msec) : 100=0.19%, 250=0.76%, 500=38.05%, 750=9.37%, 1000=8.22% 00:24:52.721 lat (msec) : 2000=13.96%, >=2000=29.45% 00:24:52.721 cpu : usr=0.03%, sys=1.25%, ctx=990, majf=0, minf=32769 00:24:52.721 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0% 00:24:52.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.721 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:24:52.721 issued rwts: total=523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.721 job5: (groupid=0, jobs=1): err= 0: pid=3119908: Fri Nov 29 21:54:23 2024 00:24:52.721 read: IOPS=45, BW=45.2MiB/s (47.4MB/s)(457MiB/10103msec) 00:24:52.721 slat (usec): min=36, max=2006.7k, avg=21877.89, stdev=122167.62 00:24:52.721 clat (msec): min=101, max=4014, avg=2297.69, stdev=1115.39 00:24:52.721 lat (msec): min=116, max=4027, avg=2319.57, stdev=1118.44 00:24:52.721 clat percentiles (msec): 00:24:52.721 | 1.00th=[ 161], 5.00th=[ 443], 10.00th=[ 776], 20.00th=[ 1368], 00:24:52.721 | 30.00th=[ 1653], 
40.00th=[ 1989], 50.00th=[ 2072], 60.00th=[ 2198], 00:24:52.721 | 70.00th=[ 3406], 80.00th=[ 3641], 90.00th=[ 3809], 95.00th=[ 3876], 00:24:52.721 | 99.00th=[ 4010], 99.50th=[ 4010], 99.90th=[ 4010], 99.95th=[ 4010], 00:24:52.721 | 99.99th=[ 4010] 00:24:52.721 bw ( KiB/s): min=10240, max=100352, per=1.60%, avg=61426.18, stdev=28073.04, samples=11 00:24:52.721 iops : min= 10, max= 98, avg=59.91, stdev=27.37, samples=11 00:24:52.721 lat (msec) : 250=1.97%, 500=4.16%, 750=3.50%, 1000=2.19%, 2000=30.63% 00:24:52.721 lat (msec) : >=2000=57.55% 00:24:52.721 cpu : usr=0.01%, sys=1.15%, ctx=1261, majf=0, minf=32769 00:24:52.721 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.0%, >=64=86.2% 00:24:52.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.721 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:24:52.721 issued rwts: total=457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.721 job5: (groupid=0, jobs=1): err= 0: pid=3119909: Fri Nov 29 21:54:23 2024 00:24:52.721 read: IOPS=110, BW=110MiB/s (116MB/s)(1111MiB/10086msec) 00:24:52.721 slat (usec): min=45, max=2170.3k, avg=9007.32, stdev=65882.17 00:24:52.721 clat (msec): min=71, max=6031, avg=1081.09, stdev=1181.26 00:24:52.721 lat (msec): min=91, max=6035, avg=1090.10, stdev=1187.03 00:24:52.721 clat percentiles (msec): 00:24:52.721 | 1.00th=[ 230], 5.00th=[ 255], 10.00th=[ 259], 20.00th=[ 317], 00:24:52.721 | 30.00th=[ 380], 40.00th=[ 409], 50.00th=[ 634], 60.00th=[ 844], 00:24:52.721 | 70.00th=[ 1167], 80.00th=[ 1351], 90.00th=[ 3775], 95.00th=[ 4212], 00:24:52.722 | 99.00th=[ 4396], 99.50th=[ 4463], 99.90th=[ 6007], 99.95th=[ 6007], 00:24:52.722 | 99.99th=[ 6007] 00:24:52.722 bw ( KiB/s): min=12288, max=438272, per=3.49%, avg=134189.87, stdev=131469.12, samples=15 00:24:52.722 iops : min= 12, max= 428, avg=130.93, stdev=128.44, samples=15 00:24:52.722 lat (msec) : 100=0.45%, 250=0.81%, 500=41.49%, 750=13.41%, 1000=8.28% 00:24:52.722 lat (msec) : 2000=24.12%, >=2000=11.43% 00:24:52.722 cpu : usr=0.08%, sys=2.35%, ctx=2098, majf=0, minf=32769 00:24:52.722 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:24:52.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.722 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.722 issued rwts: total=1111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.722 job5: (groupid=0, jobs=1): err= 0: pid=3119910: Fri Nov 29 21:54:23 2024 00:24:52.722 read: IOPS=49, BW=49.0MiB/s (51.4MB/s)(495MiB/10097msec) 00:24:52.722 slat (usec): min=34, max=2031.3k, avg=20278.89, stdev=92852.32 00:24:52.722 clat (msec): min=56, max=4299, avg=2288.15, stdev=1163.20 00:24:52.722 lat (msec): min=100, max=4301, avg=2308.43, stdev=1162.30 00:24:52.722 clat percentiles (msec): 00:24:52.722 | 1.00th=[ 114], 5.00th=[ 651], 10.00th=[ 1099], 20.00th=[ 1150], 00:24:52.722 | 30.00th=[ 1200], 40.00th=[ 1703], 50.00th=[ 2299], 60.00th=[ 2635], 00:24:52.722 | 70.00th=[ 2937], 80.00th=[ 3608], 90.00th=[ 4044], 95.00th=[ 4178], 00:24:52.722 | 99.00th=[ 4212], 99.50th=[ 4245], 99.90th=[ 4329], 99.95th=[ 4329], 00:24:52.722 | 99.99th=[ 4329] 00:24:52.722 bw ( KiB/s): min=10240, max=126976, per=1.50%, avg=57748.77, stdev=38530.93, samples=13 00:24:52.722 iops : min= 10, max= 124, avg=56.38, stdev=37.63, samples=13 00:24:52.722 lat (msec) : 100=0.20%, 
250=1.82%, 500=1.62%, 750=2.22%, 1000=2.63% 00:24:52.722 lat (msec) : 2000=34.95%, >=2000=56.57% 00:24:52.722 cpu : usr=0.01%, sys=1.44%, ctx=1369, majf=0, minf=32769 00:24:52.722 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.3% 00:24:52.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.722 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:24:52.722 issued rwts: total=495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.722 job5: (groupid=0, jobs=1): err= 0: pid=3119911: Fri Nov 29 21:54:23 2024 00:24:52.722 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(412MiB/10148msec) 00:24:52.722 slat (usec): min=771, max=2168.8k, avg=24379.65, stdev=149723.36 00:24:52.722 clat (msec): min=100, max=7290, avg=2982.37, stdev=2223.66 00:24:52.722 lat (msec): min=149, max=7295, avg=3006.75, stdev=2231.86 00:24:52.722 clat percentiles (msec): 00:24:52.722 | 1.00th=[ 169], 5.00th=[ 523], 10.00th=[ 919], 20.00th=[ 1011], 00:24:52.722 | 30.00th=[ 1028], 40.00th=[ 1099], 50.00th=[ 1770], 60.00th=[ 3876], 00:24:52.722 | 70.00th=[ 4463], 80.00th=[ 5067], 90.00th=[ 6946], 95.00th=[ 7148], 00:24:52.722 | 99.00th=[ 7282], 99.50th=[ 7282], 99.90th=[ 7282], 99.95th=[ 7282], 00:24:52.722 | 99.99th=[ 7282] 00:24:52.722 bw ( KiB/s): min= 4096, max=126976, per=1.26%, avg=48473.00, stdev=36445.42, samples=12 00:24:52.722 iops : min= 4, max= 124, avg=47.25, stdev=35.63, samples=12 00:24:52.722 lat (msec) : 250=1.94%, 500=2.91%, 750=3.16%, 1000=4.85%, 2000=37.38% 00:24:52.722 lat (msec) : >=2000=49.76% 00:24:52.722 cpu : usr=0.05%, sys=1.63%, ctx=1115, majf=0, minf=32769 00:24:52.722 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.8%, >=64=84.7% 00:24:52.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.722 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:24:52.722 issued rwts: total=412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.722 job5: (groupid=0, jobs=1): err= 0: pid=3119913: Fri Nov 29 21:54:23 2024 00:24:52.722 read: IOPS=111, BW=112MiB/s (117MB/s)(1129MiB/10104msec) 00:24:52.722 slat (usec): min=41, max=114604, avg=8853.55, stdev=20516.77 00:24:52.722 clat (msec): min=102, max=2413, avg=1021.66, stdev=485.76 00:24:52.722 lat (msec): min=105, max=2418, avg=1030.51, stdev=489.21 00:24:52.722 clat percentiles (msec): 00:24:52.722 | 1.00th=[ 138], 5.00th=[ 414], 10.00th=[ 617], 20.00th=[ 667], 00:24:52.722 | 30.00th=[ 760], 40.00th=[ 835], 50.00th=[ 902], 60.00th=[ 995], 00:24:52.722 | 70.00th=[ 1028], 80.00th=[ 1133], 90.00th=[ 1921], 95.00th=[ 2072], 00:24:52.722 | 99.00th=[ 2333], 99.50th=[ 2366], 99.90th=[ 2400], 99.95th=[ 2400], 00:24:52.722 | 99.99th=[ 2400] 00:24:52.722 bw ( KiB/s): min=10240, max=219136, per=3.14%, avg=120684.06, stdev=55490.69, samples=17 00:24:52.722 iops : min= 10, max= 214, avg=117.76, stdev=54.20, samples=17 00:24:52.722 lat (msec) : 250=2.48%, 500=3.37%, 750=23.83%, 1000=30.56%, 2000=31.62% 00:24:52.722 lat (msec) : >=2000=8.15% 00:24:52.722 cpu : usr=0.00%, sys=1.72%, ctx=1595, majf=0, minf=32769 00:24:52.722 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:24:52.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.722 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.722 issued rwts: total=1129,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:24:52.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.722 job5: (groupid=0, jobs=1): err= 0: pid=3119914: Fri Nov 29 21:54:23 2024 00:24:52.722 read: IOPS=92, BW=92.5MiB/s (97.0MB/s)(940MiB/10162msec) 00:24:52.722 slat (usec): min=48, max=2004.9k, avg=10680.36, stdev=67015.07 00:24:52.722 clat (msec): min=115, max=4624, avg=1333.22, stdev=1300.39 00:24:52.722 lat (msec): min=164, max=4626, avg=1343.90, stdev=1306.47 00:24:52.722 clat percentiles (msec): 00:24:52.722 | 1.00th=[ 355], 5.00th=[ 384], 10.00th=[ 388], 20.00th=[ 393], 00:24:52.722 | 30.00th=[ 397], 40.00th=[ 405], 50.00th=[ 527], 60.00th=[ 1083], 00:24:52.722 | 70.00th=[ 1670], 80.00th=[ 2265], 90.00th=[ 3910], 95.00th=[ 4279], 00:24:52.722 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:24:52.722 | 99.99th=[ 4597] 00:24:52.722 bw ( KiB/s): min= 8192, max=335201, per=2.70%, avg=103860.31, stdev=105080.04, samples=16 00:24:52.722 iops : min= 8, max= 327, avg=101.31, stdev=102.56, samples=16 00:24:52.722 lat (msec) : 250=0.43%, 500=48.30%, 750=7.66%, 1000=2.55%, 2000=15.53% 00:24:52.722 lat (msec) : >=2000=25.53% 00:24:52.722 cpu : usr=0.08%, sys=1.92%, ctx=1746, majf=0, minf=32769 00:24:52.722 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:24:52.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.722 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.722 issued rwts: total=940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.722 job5: (groupid=0, jobs=1): err= 0: pid=3119915: Fri Nov 29 21:54:23 2024 00:24:52.722 read: IOPS=116, BW=117MiB/s (122MB/s)(1169MiB/10034msec) 00:24:52.722 slat (usec): min=44, max=116690, avg=8551.33, stdev=14469.60 00:24:52.722 clat (msec): min=31, max=2037, avg=1035.43, stdev=508.89 00:24:52.722 lat (msec): min=43, max=2039, avg=1043.98, stdev=510.16 00:24:52.722 clat percentiles (msec): 00:24:52.722 | 1.00th=[ 129], 5.00th=[ 393], 10.00th=[ 397], 20.00th=[ 575], 00:24:52.722 | 30.00th=[ 667], 40.00th=[ 802], 50.00th=[ 927], 60.00th=[ 1133], 00:24:52.722 | 70.00th=[ 1368], 80.00th=[ 1603], 90.00th=[ 1770], 95.00th=[ 1888], 00:24:52.722 | 99.00th=[ 2005], 99.50th=[ 2005], 99.90th=[ 2039], 99.95th=[ 2039], 00:24:52.722 | 99.99th=[ 2039] 00:24:52.722 bw ( KiB/s): min=26624, max=321536, per=3.08%, avg=118522.33, stdev=82165.39, samples=18 00:24:52.722 iops : min= 26, max= 314, avg=115.67, stdev=80.23, samples=18 00:24:52.722 lat (msec) : 50=0.34%, 100=0.60%, 250=1.20%, 500=15.91%, 750=17.71% 00:24:52.722 lat (msec) : 1000=17.54%, 2000=45.85%, >=2000=0.86% 00:24:52.722 cpu : usr=0.04%, sys=1.68%, ctx=2270, majf=0, minf=32769 00:24:52.722 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:24:52.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.722 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.722 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.722 job5: (groupid=0, jobs=1): err= 0: pid=3119916: Fri Nov 29 21:54:23 2024 00:24:52.722 read: IOPS=73, BW=74.0MiB/s (77.6MB/s)(747MiB/10095msec) 00:24:52.722 slat (usec): min=76, max=1329.7k, avg=13425.64, stdev=51637.59 00:24:52.722 clat (msec): min=61, max=3761, avg=1648.53, stdev=1039.29 00:24:52.722 lat (msec): min=96, max=3765, 
avg=1661.96, stdev=1043.26 00:24:52.722 clat percentiles (msec): 00:24:52.722 | 1.00th=[ 247], 5.00th=[ 388], 10.00th=[ 414], 20.00th=[ 584], 00:24:52.722 | 30.00th=[ 944], 40.00th=[ 1234], 50.00th=[ 1418], 60.00th=[ 1687], 00:24:52.722 | 70.00th=[ 2232], 80.00th=[ 2467], 90.00th=[ 3473], 95.00th=[ 3574], 00:24:52.722 | 99.00th=[ 3708], 99.50th=[ 3742], 99.90th=[ 3775], 99.95th=[ 3775], 00:24:52.723 | 99.99th=[ 3775] 00:24:52.723 bw ( KiB/s): min=18432, max=206848, per=1.94%, avg=74555.59, stdev=44837.56, samples=17 00:24:52.723 iops : min= 18, max= 202, avg=72.71, stdev=43.81, samples=17 00:24:52.723 lat (msec) : 100=0.27%, 250=0.94%, 500=15.93%, 750=7.63%, 1000=6.83% 00:24:52.723 lat (msec) : 2000=35.61%, >=2000=32.80% 00:24:52.723 cpu : usr=0.03%, sys=1.65%, ctx=1966, majf=0, minf=32769 00:24:52.723 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:24:52.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.723 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:24:52.723 issued rwts: total=747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.723 job5: (groupid=0, jobs=1): err= 0: pid=3119917: Fri Nov 29 21:54:23 2024 00:24:52.723 read: IOPS=39, BW=39.2MiB/s (41.1MB/s)(393MiB/10033msec) 00:24:52.723 slat (usec): min=56, max=2055.8k, avg=25441.00, stdev=147314.08 00:24:52.723 clat (msec): min=31, max=7565, avg=3043.42, stdev=2304.62 00:24:52.723 lat (msec): min=34, max=7570, avg=3068.86, stdev=2313.23 00:24:52.723 clat percentiles (msec): 00:24:52.723 | 1.00th=[ 39], 5.00th=[ 81], 10.00th=[ 159], 20.00th=[ 944], 00:24:52.723 | 30.00th=[ 1083], 40.00th=[ 1133], 50.00th=[ 3440], 60.00th=[ 4111], 00:24:52.723 | 70.00th=[ 4530], 80.00th=[ 5000], 90.00th=[ 6745], 95.00th=[ 7416], 00:24:52.723 | 99.00th=[ 7550], 99.50th=[ 7550], 99.90th=[ 7550], 99.95th=[ 7550], 00:24:52.723 | 99.99th=[ 7550] 00:24:52.723 bw ( KiB/s): min=12288, max=116736, per=1.18%, avg=45388.75, stdev=31687.70, samples=12 00:24:52.723 iops : min= 12, max= 114, avg=44.25, stdev=30.93, samples=12 00:24:52.723 lat (msec) : 50=1.27%, 100=7.12%, 250=3.56%, 500=2.54%, 750=3.56% 00:24:52.723 lat (msec) : 1000=2.80%, 2000=24.68%, >=2000=54.45% 00:24:52.723 cpu : usr=0.01%, sys=1.45%, ctx=1073, majf=0, minf=32769 00:24:52.723 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0% 00:24:52.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.723 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:24:52.723 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.723 00:24:52.723 Run status group 0 (all jobs): 00:24:52.723 READ: bw=3758MiB/s (3941MB/s), 2930KiB/s-205MiB/s (3000kB/s-215MB/s), io=38.7GiB (41.6GB), run=10021-10548msec 00:24:52.723 00:24:52.723 Disk stats (read/write): 00:24:52.723 nvme0n1: ios=78873/0, merge=0/0, ticks=7975670/0, in_queue=7975670, util=97.72% 00:24:52.723 nvme1n1: ios=20625/0, merge=0/0, ticks=5788969/0, in_queue=5788969, util=98.38% 00:24:52.723 nvme2n1: ios=60815/0, merge=0/0, ticks=6340200/0, in_queue=6340200, util=98.55% 00:24:52.723 nvme3n1: ios=27452/0, merge=0/0, ticks=7094919/0, in_queue=7094919, util=98.46% 00:24:52.723 nvme4n1: ios=43260/0, merge=0/0, ticks=5771297/0, in_queue=5771297, util=98.94% 00:24:52.723 nvme5n1: ios=84132/0, merge=0/0, ticks=6961579/0, in_queue=6961579, util=98.96% 
00:24:52.723 21:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:24:52.723 21:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:24:52.723 21:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:24:52.723 21:54:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:24:52.987 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:24:52.987 21:54:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:53.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:24:53.921 21:54:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:54.855 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.855 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:55.113 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.113 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:24:55.114 21:54:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:56.048 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:24:56.048 21:54:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:57.007 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:57.007 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:24:57.007 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:24:57.007 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:57.007 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:24:57.007 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:24:57.007 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:57.008 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:24:57.008 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:57.008 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.008 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:57.008 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.008 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:24:57.008 21:54:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:58.099 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:58.099 rmmod nvme_rdma 00:24:58.099 rmmod nvme_fabrics 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@513 -- # '[' -n 3118375 ']' 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@514 -- # killprocess 3118375 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 3118375 ']' 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 3118375 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3118375 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3118375' 00:24:58.099 killing process with pid 3118375 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 3118375 00:24:58.099 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 3118375 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:24:58.668 00:24:58.668 real 0m32.163s 00:24:58.668 user 
1m51.018s 00:24:58.668 sys 0m17.825s 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:24:58.668 ************************************ 00:24:58.668 END TEST nvmf_srq_overwhelm 00:24:58.668 ************************************ 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:58.668 ************************************ 00:24:58.668 START TEST nvmf_shutdown 00:24:58.668 ************************************ 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:24:58.668 * Looking for test storage... 00:24:58.668 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:58.668 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:58.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.669 --rc genhtml_branch_coverage=1 00:24:58.669 --rc genhtml_function_coverage=1 00:24:58.669 --rc genhtml_legend=1 00:24:58.669 --rc geninfo_all_blocks=1 00:24:58.669 --rc geninfo_unexecuted_blocks=1 00:24:58.669 00:24:58.669 ' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:58.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.669 --rc genhtml_branch_coverage=1 00:24:58.669 --rc genhtml_function_coverage=1 00:24:58.669 --rc genhtml_legend=1 00:24:58.669 --rc geninfo_all_blocks=1 00:24:58.669 --rc geninfo_unexecuted_blocks=1 00:24:58.669 00:24:58.669 ' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:58.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.669 --rc genhtml_branch_coverage=1 00:24:58.669 --rc genhtml_function_coverage=1 00:24:58.669 --rc genhtml_legend=1 00:24:58.669 --rc geninfo_all_blocks=1 00:24:58.669 --rc geninfo_unexecuted_blocks=1 00:24:58.669 00:24:58.669 ' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:58.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.669 --rc genhtml_branch_coverage=1 00:24:58.669 --rc genhtml_function_coverage=1 00:24:58.669 --rc genhtml_legend=1 00:24:58.669 --rc geninfo_all_blocks=1 00:24:58.669 --rc geninfo_unexecuted_blocks=1 00:24:58.669 00:24:58.669 ' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.669 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:58.669 21:54:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@169 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:58.669 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:58.929 ************************************ 00:24:58.929 START TEST nvmf_shutdown_tc1 00:24:58.929 ************************************ 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.929 21:54:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.495 21:54:37 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:05.495 
21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:05.495 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:05.495 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:05.495 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:05.495 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # rdma_device_init 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:05.495 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:05.496 21:54:37 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:05.496 6: mlx_0_0: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:25:05.496 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:05.496 altname enp217s0f0np0 00:25:05.496 altname ens818f0np0 00:25:05.496 inet 192.168.100.8/24 scope global mlx_0_0 00:25:05.496 valid_lft forever preferred_lft forever 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:05.496 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:05.496 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:05.496 altname enp217s0f1np1 00:25:05.496 altname ens818f1np1 00:25:05.496 inet 192.168.100.9/24 scope global mlx_0_1 00:25:05.496 valid_lft forever preferred_lft forever 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.496 21:54:37 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:05.496 192.168.100.9' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:05.496 192.168.100.9' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # head -n 1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:05.496 21:54:37 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:05.496 192.168.100.9' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # tail -n +2 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # head -n 1 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.496 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=3125872 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 3125872 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3125872 ']' 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:05.497 [2024-11-29 21:54:37.430455] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
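For readers following the address setup traced above: allocate_nic_ips walks get_rdma_if_list and reads each RoCE netdev's IPv4 address with the exact pipeline shown in the xtrace (interface=, ip -o -4 addr show, awk, cut). A minimal sketch of that helper, reconstructed from the trace, with the interface names taken from this run:

    get_ip_address() {
        local interface=$1
        # Field 4 of "ip -o -4 addr show" is e.g. 192.168.100.8/24;
        # cut drops the /24 prefix length, leaving only the address.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9 in this run

The two results become RDMA_IP_LIST, from which head -n 1 and tail -n +2 | head -n 1 pick NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP respectively, as the @481/@482 lines above show.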
00:25:05.497 [2024-11-29 21:54:37.430505] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.497 [2024-11-29 21:54:37.499436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.497 [2024-11-29 21:54:37.537814] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.497 [2024-11-29 21:54:37.537859] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.497 [2024-11-29 21:54:37.537868] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.497 [2024-11-29 21:54:37.537876] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.497 [2024-11-29 21:54:37.537883] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.497 [2024-11-29 21:54:37.537993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.497 [2024-11-29 21:54:37.538058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.497 [2024-11-29 21:54:37.538150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.497 [2024-11-29 21:54:37.538151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.497 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:05.497 [2024-11-29 21:54:37.722410] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa2a250/0xa2e700) succeed. 00:25:05.497 [2024-11-29 21:54:37.732952] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa2b840/0xa6fda0) succeed. 
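A note on the core mask: the -m 0x1E passed to nvmf_tgt is binary 11110, which is why the four reactor_run notices above land on cores 1 through 4 while core 0 stays free for the test harness. The transport step that follows (target/shutdown.sh@21) can also be issued by hand; a sketch assuming the standard rpc.py entry point against the default /var/tmp/spdk.sock (the trace itself goes through the rpc_cmd wrapper):

    # Same call as target/shutdown.sh@21 above, issued directly:
    #   -t rdma               transport type matching this RoCE setup
    #   --num-shared-buffers  1024 data buffers shared across queues
    #   -u 8192               I/O unit size in bytes
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices that follow it confirm the transport bound both mlx5 ports discovered earlier.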
00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.756 21:54:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:05.756 Malloc1 00:25:05.756 [2024-11-29 21:54:37.956549] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:05.756 Malloc2 00:25:06.015 Malloc3 00:25:06.015 Malloc4 00:25:06.015 Malloc5 00:25:06.015 Malloc6 00:25:06.015 Malloc7 00:25:06.015 Malloc8 00:25:06.274 Malloc9 00:25:06.274 Malloc10 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3126167 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3126167 /var/tmp/bdevperf.sock 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3126167 ']' 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
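The ten cat calls in the create_subsystems loop above each append one subsystem's worth of RPCs to rpcs.txt; the file's contents are not echoed in the trace, only the resulting Malloc1..Malloc10 bdevs and the 4420 listener notice. An illustrative expansion for i=1, using the MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 values set at shutdown.sh@12-13 and the 192.168.100.8:4420 listener confirmed by the nvmf_rdma_listen notice (the serial-number argument is a guess, not taken from the trace):

    bdev_malloc_create -b Malloc1 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1   # -s value assumed
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420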
00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.274 { 00:25:06.274 "params": { 00:25:06.274 "name": "Nvme$subsystem", 00:25:06.274 "trtype": "$TEST_TRANSPORT", 00:25:06.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.274 "adrfam": "ipv4", 00:25:06.274 "trsvcid": "$NVMF_PORT", 00:25:06.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.274 "hdgst": ${hdgst:-false}, 00:25:06.274 "ddgst": ${ddgst:-false} 00:25:06.274 }, 00:25:06.274 "method": "bdev_nvme_attach_controller" 00:25:06.274 } 00:25:06.274 EOF 00:25:06.274 )") 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.274 { 00:25:06.274 "params": { 00:25:06.274 "name": "Nvme$subsystem", 00:25:06.274 "trtype": "$TEST_TRANSPORT", 00:25:06.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.274 "adrfam": "ipv4", 00:25:06.274 "trsvcid": "$NVMF_PORT", 00:25:06.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.274 "hdgst": ${hdgst:-false}, 00:25:06.274 "ddgst": ${ddgst:-false} 00:25:06.274 }, 00:25:06.274 "method": "bdev_nvme_attach_controller" 00:25:06.274 } 00:25:06.274 EOF 00:25:06.274 )") 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.274 { 00:25:06.274 "params": { 00:25:06.274 "name": "Nvme$subsystem", 00:25:06.274 "trtype": "$TEST_TRANSPORT", 00:25:06.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.274 "adrfam": "ipv4", 00:25:06.274 "trsvcid": "$NVMF_PORT", 00:25:06.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.274 "hdgst": ${hdgst:-false}, 00:25:06.274 "ddgst": ${ddgst:-false} 00:25:06.274 }, 00:25:06.274 "method": "bdev_nvme_attach_controller" 00:25:06.274 } 00:25:06.274 EOF 00:25:06.274 )") 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.274 { 00:25:06.274 "params": { 00:25:06.274 "name": "Nvme$subsystem", 00:25:06.274 "trtype": "$TEST_TRANSPORT", 00:25:06.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.274 "adrfam": "ipv4", 00:25:06.274 "trsvcid": "$NVMF_PORT", 00:25:06.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.274 "hdgst": ${hdgst:-false}, 00:25:06.274 "ddgst": ${ddgst:-false} 00:25:06.274 }, 00:25:06.274 "method": "bdev_nvme_attach_controller" 00:25:06.274 } 00:25:06.274 EOF 00:25:06.274 )") 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.274 { 00:25:06.274 "params": { 00:25:06.274 "name": "Nvme$subsystem", 00:25:06.274 "trtype": "$TEST_TRANSPORT", 00:25:06.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.274 "adrfam": "ipv4", 00:25:06.274 "trsvcid": "$NVMF_PORT", 00:25:06.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.274 "hdgst": ${hdgst:-false}, 00:25:06.274 "ddgst": ${ddgst:-false} 00:25:06.274 }, 00:25:06.274 "method": "bdev_nvme_attach_controller" 00:25:06.274 } 00:25:06.274 EOF 00:25:06.274 )") 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.274 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.275 { 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme$subsystem", 00:25:06.275 "trtype": "$TEST_TRANSPORT", 00:25:06.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "$NVMF_PORT", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.275 "hdgst": ${hdgst:-false}, 00:25:06.275 "ddgst": ${ddgst:-false} 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 } 00:25:06.275 EOF 00:25:06.275 )") 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.275 [2024-11-29 21:54:38.452659] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
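The repeated config+=(... EOF) fragments above are ten passes of the same heredoc template, one per subsystem in {1..10}; the jq . at nvmf/common.sh@580 then validates the assembled document before it is handed to bdev_svc as /dev/fd/63. A stripped-down sketch of the expansion for a single subsystem, with the placeholders resolved to this run's values ($TEST_TRANSPORT=rdma, $NVMF_FIRST_TARGET_IP=192.168.100.8, $NVMF_PORT=4420); the outer wrapper the real helper emits around the entries is elided here:

    gen_entry() {
        local subsystem=$1
        cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    }
    gen_entry 1 | jq .   # validate one resolved entry, as @580 does for the full set

The printf '%s\n' output further down shows the fully resolved Nvme1..Nvme10 array exactly as bdev_svc receives it.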
00:25:06.275 [2024-11-29 21:54:38.452720] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.275 { 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme$subsystem", 00:25:06.275 "trtype": "$TEST_TRANSPORT", 00:25:06.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "$NVMF_PORT", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.275 "hdgst": ${hdgst:-false}, 00:25:06.275 "ddgst": ${ddgst:-false} 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 } 00:25:06.275 EOF 00:25:06.275 )") 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.275 { 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme$subsystem", 00:25:06.275 "trtype": "$TEST_TRANSPORT", 00:25:06.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "$NVMF_PORT", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.275 "hdgst": ${hdgst:-false}, 00:25:06.275 "ddgst": ${ddgst:-false} 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 } 00:25:06.275 EOF 00:25:06.275 )") 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.275 { 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme$subsystem", 00:25:06.275 "trtype": "$TEST_TRANSPORT", 00:25:06.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "$NVMF_PORT", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.275 "hdgst": ${hdgst:-false}, 00:25:06.275 "ddgst": ${ddgst:-false} 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 } 00:25:06.275 EOF 00:25:06.275 )") 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:06.275 { 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme$subsystem", 00:25:06.275 "trtype": "$TEST_TRANSPORT", 00:25:06.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.275 "adrfam": 
"ipv4", 00:25:06.275 "trsvcid": "$NVMF_PORT", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.275 "hdgst": ${hdgst:-false}, 00:25:06.275 "ddgst": ${ddgst:-false} 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 } 00:25:06.275 EOF 00:25:06.275 )") 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:25:06.275 21:54:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme1", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme2", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme3", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme4", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme5", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme6", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme7", 00:25:06.275 "trtype": "rdma", 
00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme8", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme9", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 },{ 00:25:06.275 "params": { 00:25:06.275 "name": "Nvme10", 00:25:06.275 "trtype": "rdma", 00:25:06.275 "traddr": "192.168.100.8", 00:25:06.275 "adrfam": "ipv4", 00:25:06.275 "trsvcid": "4420", 00:25:06.275 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:06.275 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:06.275 "hdgst": false, 00:25:06.275 "ddgst": false 00:25:06.275 }, 00:25:06.275 "method": "bdev_nvme_attach_controller" 00:25:06.275 }' 00:25:06.535 [2024-11-29 21:54:38.529826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.535 [2024-11-29 21:54:38.569385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.471 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.471 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:25:07.471 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:07.472 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.472 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:07.472 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.472 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3126167 00:25:07.472 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:07.472 21:54:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:08.409 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3126167 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3125872 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.409 { 00:25:08.409 "params": { 00:25:08.409 "name": "Nvme$subsystem", 00:25:08.409 "trtype": "$TEST_TRANSPORT", 00:25:08.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.409 "adrfam": "ipv4", 00:25:08.409 "trsvcid": "$NVMF_PORT", 00:25:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.409 "hdgst": ${hdgst:-false}, 00:25:08.409 "ddgst": ${ddgst:-false} 00:25:08.409 }, 00:25:08.409 "method": "bdev_nvme_attach_controller" 00:25:08.409 } 00:25:08.409 EOF 00:25:08.409 )") 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.409 { 00:25:08.409 "params": { 00:25:08.409 "name": "Nvme$subsystem", 00:25:08.409 "trtype": "$TEST_TRANSPORT", 00:25:08.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.409 "adrfam": "ipv4", 00:25:08.409 "trsvcid": "$NVMF_PORT", 00:25:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.409 "hdgst": ${hdgst:-false}, 00:25:08.409 "ddgst": ${ddgst:-false} 00:25:08.409 }, 00:25:08.409 "method": "bdev_nvme_attach_controller" 00:25:08.409 } 00:25:08.409 EOF 00:25:08.409 )") 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.409 { 00:25:08.409 "params": { 00:25:08.409 "name": "Nvme$subsystem", 00:25:08.409 "trtype": "$TEST_TRANSPORT", 00:25:08.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.409 "adrfam": "ipv4", 00:25:08.409 "trsvcid": "$NVMF_PORT", 00:25:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.409 "hdgst": ${hdgst:-false}, 00:25:08.409 "ddgst": ${ddgst:-false} 00:25:08.409 }, 00:25:08.409 "method": "bdev_nvme_attach_controller" 00:25:08.409 } 00:25:08.409 EOF 00:25:08.409 )") 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.409 21:54:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.409 { 00:25:08.409 "params": { 00:25:08.409 "name": "Nvme$subsystem", 00:25:08.409 "trtype": "$TEST_TRANSPORT", 00:25:08.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.409 "adrfam": "ipv4", 00:25:08.409 "trsvcid": "$NVMF_PORT", 00:25:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.409 "hdgst": ${hdgst:-false}, 00:25:08.409 "ddgst": ${ddgst:-false} 00:25:08.409 }, 00:25:08.409 "method": "bdev_nvme_attach_controller" 00:25:08.409 } 00:25:08.409 EOF 00:25:08.409 )") 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.409 { 00:25:08.409 "params": { 00:25:08.409 "name": "Nvme$subsystem", 00:25:08.409 "trtype": "$TEST_TRANSPORT", 00:25:08.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.409 "adrfam": "ipv4", 00:25:08.409 "trsvcid": "$NVMF_PORT", 00:25:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.409 "hdgst": ${hdgst:-false}, 00:25:08.409 "ddgst": ${ddgst:-false} 00:25:08.409 }, 00:25:08.409 "method": "bdev_nvme_attach_controller" 00:25:08.409 } 00:25:08.409 EOF 00:25:08.409 )") 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.409 { 00:25:08.409 "params": { 00:25:08.409 "name": "Nvme$subsystem", 00:25:08.409 "trtype": "$TEST_TRANSPORT", 00:25:08.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.409 "adrfam": "ipv4", 00:25:08.409 "trsvcid": "$NVMF_PORT", 00:25:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.409 "hdgst": ${hdgst:-false}, 00:25:08.409 "ddgst": ${ddgst:-false} 00:25:08.409 }, 00:25:08.409 "method": "bdev_nvme_attach_controller" 00:25:08.409 } 00:25:08.409 EOF 00:25:08.409 )") 00:25:08.409 [2024-11-29 21:54:40.496356] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
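The tc1 shutdown check traced above (shutdown.sh@84 through @89) boils down to hard-killing the application driving I/O and then asserting that the nvmf target itself survived the SIGKILL. A minimal sketch of that sequence, with hypothetical shell variables standing in for the literal pids 3126167 and 3125872 captured by the test:

# Sketch of the tc1 kill/verify sequence from shutdown.sh; $first_app_pid and
# $nvmf_target_pid are hypothetical names for the pids captured earlier in the run.
kill -9 "$first_app_pid"      # SIGKILL the bdev_svc/bdevperf instance driving I/O
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmf_target_pid"    # exits 0 only if the target process is still alive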
00:25:08.409 [2024-11-29 21:54:40.496412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126479 ] 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.409 { 00:25:08.409 "params": { 00:25:08.409 "name": "Nvme$subsystem", 00:25:08.409 "trtype": "$TEST_TRANSPORT", 00:25:08.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.409 "adrfam": "ipv4", 00:25:08.409 "trsvcid": "$NVMF_PORT", 00:25:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.409 "hdgst": ${hdgst:-false}, 00:25:08.409 "ddgst": ${ddgst:-false} 00:25:08.409 }, 00:25:08.409 "method": "bdev_nvme_attach_controller" 00:25:08.409 } 00:25:08.409 EOF 00:25:08.409 )") 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.409 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.410 { 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme$subsystem", 00:25:08.410 "trtype": "$TEST_TRANSPORT", 00:25:08.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "$NVMF_PORT", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.410 "hdgst": ${hdgst:-false}, 00:25:08.410 "ddgst": ${ddgst:-false} 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 } 00:25:08.410 EOF 00:25:08.410 )") 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.410 { 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme$subsystem", 00:25:08.410 "trtype": "$TEST_TRANSPORT", 00:25:08.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "$NVMF_PORT", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.410 "hdgst": ${hdgst:-false}, 00:25:08.410 "ddgst": ${ddgst:-false} 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 } 00:25:08.410 EOF 00:25:08.410 )") 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:08.410 { 00:25:08.410 "params": { 00:25:08.410 "name": 
"Nvme$subsystem", 00:25:08.410 "trtype": "$TEST_TRANSPORT", 00:25:08.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "$NVMF_PORT", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.410 "hdgst": ${hdgst:-false}, 00:25:08.410 "ddgst": ${ddgst:-false} 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 } 00:25:08.410 EOF 00:25:08.410 )") 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:25:08.410 21:54:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme1", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme2", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme3", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme4", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme5", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme6", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": 
"bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme7", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme8", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme9", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 },{ 00:25:08.410 "params": { 00:25:08.410 "name": "Nvme10", 00:25:08.410 "trtype": "rdma", 00:25:08.410 "traddr": "192.168.100.8", 00:25:08.410 "adrfam": "ipv4", 00:25:08.410 "trsvcid": "4420", 00:25:08.410 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:08.410 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:08.410 "hdgst": false, 00:25:08.410 "ddgst": false 00:25:08.410 }, 00:25:08.410 "method": "bdev_nvme_attach_controller" 00:25:08.410 }' 00:25:08.410 [2024-11-29 21:54:40.571477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.410 [2024-11-29 21:54:40.610345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.348 Running I/O for 1 seconds... 
00:25:10.727 3587.00 IOPS, 224.19 MiB/s
00:25:10.727 Latency(us)
00:25:10.727 [2024-11-29T20:54:42.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:10.727 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.727 Verification LBA range: start 0x0 length 0x400
00:25:10.727 Nvme1n1 : 1.18 378.21 23.64 0.00 0.00 166600.94 10066.33 201326.59
00:25:10.727 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.727 Verification LBA range: start 0x0 length 0x400
00:25:10.727 Nvme2n1 : 1.19 377.75 23.61 0.00 0.00 164874.77 10643.05 187904.82
00:25:10.727 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.727 Verification LBA range: start 0x0 length 0x400
00:25:10.727 Nvme3n1 : 1.19 377.39 23.59 0.00 0.00 162416.17 10957.62 182032.79
00:25:10.727 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.727 Verification LBA range: start 0x0 length 0x400
00:25:10.727 Nvme4n1 : 1.19 377.01 23.56 0.00 0.00 160476.13 11219.76 175321.91
00:25:10.727 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.728 Verification LBA range: start 0x0 length 0x400
00:25:10.728 Nvme5n1 : 1.19 376.55 23.53 0.00 0.00 158925.74 11744.05 164416.72
00:25:10.728 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.728 Verification LBA range: start 0x0 length 0x400
00:25:10.728 Nvme6n1 : 1.19 403.05 25.19 0.00 0.00 146017.43 5662.31 119957.09
00:25:10.728 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.728 Verification LBA range: start 0x0 length 0x400
00:25:10.728 Nvme7n1 : 1.19 400.15 25.01 0.00 0.00 144992.18 12320.77 119957.09
00:25:10.728 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.728 Verification LBA range: start 0x0 length 0x400
00:25:10.728 Nvme8n1 : 1.19 402.28 25.14 0.00 0.00 142300.12 10433.33 120795.96
00:25:10.728 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.728 Verification LBA range: start 0x0 length 0x400
00:25:10.728 Nvme9n1 : 1.18 379.17 23.70 0.00 0.00 149537.12 9751.76 114085.07
00:25:10.728 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.728 Verification LBA range: start 0x0 length 0x400
00:25:10.728 Nvme10n1 : 1.18 324.52 20.28 0.00 0.00 172059.85 9542.04 202165.45
00:25:10.728 [2024-11-29T20:54:42.976Z] ===================================================================================================================
00:25:10.728 [2024-11-29T20:54:42.976Z] Total : 3796.07 237.25 0.00 0.00 156347.95 5662.31 202165.45
00:25:10.728 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:25:10.728 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:10.728 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:10.728 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:10.728 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:10.728 21:54:42
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:10.728 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:10.988 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:10.988 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:10.988 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:10.988 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.988 21:54:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:10.988 rmmod nvme_rdma 00:25:10.988 rmmod nvme_fabrics 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 3125872 ']' 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 3125872 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3125872 ']' 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3125872 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3125872 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3125872' 00:25:10.988 killing process with pid 3125872 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3125872 00:25:10.988 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3125872 00:25:11.559 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:11.559 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:11.559 00:25:11.559 real 0m12.611s 00:25:11.559 user 0m28.227s 00:25:11.559 sys 0m5.905s 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:11.560 21:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:11.560 ************************************ 00:25:11.560 END TEST nvmf_shutdown_tc1 00:25:11.560 ************************************ 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:11.560 ************************************ 00:25:11.560 START TEST nvmf_shutdown_tc2 00:25:11.560 ************************************ 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.560 21:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:11.560 
21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:11.560 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:11.560 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:11.560 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.561 21:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:11.561 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:11.561 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # rdma_device_init 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:11.561 21:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:11.561 21:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:11.561 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:11.561 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:11.561 altname enp217s0f0np0 00:25:11.561 altname ens818f0np0 00:25:11.561 inet 192.168.100.8/24 scope global mlx_0_0 00:25:11.561 valid_lft forever preferred_lft forever 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:11.561 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:11.561 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:11.561 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:11.561 altname enp217s0f1np1 00:25:11.561 altname ens818f1np1 00:25:11.562 inet 192.168.100.9/24 scope global mlx_0_1 00:25:11.562 valid_lft forever preferred_lft forever 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:11.562 21:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:11.562 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:11.822 192.168.100.9' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:11.822 192.168.100.9' 00:25:11.822 21:54:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # head -n 1 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:11.822 192.168.100.9' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # tail -n +2 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # head -n 1 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3127114 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3127114 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3127114 ']' 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:11.822 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
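The nvmfappstart helper traced here reduces to launching nvmf_tgt in the background and waiting for its RPC socket to come up; the pid it captures is the nvmfpid=3127114 visible above. A rough sketch:

# Rough equivalent of the nvmfappstart -m 0x1E step traced above;
# $rootdir stands in for /var/jenkins/workspace/nvmf-phy-autotest/spdk.
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# waitforlisten polls the default /var/tmp/spdk.sock until the target answers RPCs
waitforlisten "$nvmfpid"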
00:25:11.823 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:11.823 21:54:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 [2024-11-29 21:54:43.928306] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:11.823 [2024-11-29 21:54:43.928363] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.823 [2024-11-29 21:54:43.997299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.823 [2024-11-29 21:54:44.037513] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.823 [2024-11-29 21:54:44.037555] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.823 [2024-11-29 21:54:44.037564] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.823 [2024-11-29 21:54:44.037572] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.823 [2024-11-29 21:54:44.037579] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.823 [2024-11-29 21:54:44.037624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.823 [2024-11-29 21:54:44.037711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.823 [2024-11-29 21:54:44.037781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.823 [2024-11-29 21:54:44.037783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:12.083 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.083 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:12.083 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:12.083 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:12.084 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.084 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.084 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:12.084 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.084 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.084 [2024-11-29 21:54:44.207287] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6b7250/0x6bb700) succeed. 00:25:12.084 [2024-11-29 21:54:44.217755] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6b8840/0x6fcda0) succeed. 
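Since rpc_cmd without -s talks to the default /var/tmp/spdk.sock, the transport creation above can be reproduced by hand against a running target with scripts/rpc.py:

# Manual equivalent of the rpc_cmd nvmf_create_transport call traced above
"$rootdir/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192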
00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.344 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.345 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.345 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.345 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.345 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:12.345 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:12.345 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:12.345 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.345 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.345 Malloc1 00:25:12.345 [2024-11-29 21:54:44.444461] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:12.345 Malloc2 00:25:12.345 Malloc3 00:25:12.345 Malloc4 00:25:12.604 Malloc5 00:25:12.604 Malloc6 00:25:12.604 Malloc7 00:25:12.604 Malloc8 00:25:12.604 Malloc9 00:25:12.604 Malloc10 00:25:12.604 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.604 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:12.604 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:12.604 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3127418 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3127418 /var/tmp/bdevperf.sock 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3127418 ']' 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
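Each `cat` in the shutdown.sh@28-29 loop above appends one subsystem block to rpcs.txt, and the Malloc1 through Malloc10 bdevs plus the RDMA listener on 192.168.100.8:4420 are the visible result of replaying that batch through rpc_cmd. The batch itself is not echoed in the log; a plausible shape for one run of the loop, with illustrative bdev sizes and serial numbers (assumptions, not the script's actual values), is:

    for i in {1..10}; do
        cat <<EOF
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    EOF
    done > rpcs.txt
    # rpc.py executes line-separated commands read from stdin
    ./scripts/rpc.py < rpcs.txt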
00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.863 { 00:25:12.863 "params": { 00:25:12.863 "name": "Nvme$subsystem", 00:25:12.863 "trtype": "$TEST_TRANSPORT", 00:25:12.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.863 "adrfam": "ipv4", 00:25:12.863 "trsvcid": "$NVMF_PORT", 00:25:12.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.863 "hdgst": ${hdgst:-false}, 00:25:12.863 "ddgst": ${ddgst:-false} 00:25:12.863 }, 00:25:12.863 "method": "bdev_nvme_attach_controller" 00:25:12.863 } 00:25:12.863 EOF 00:25:12.863 )") 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.863 { 00:25:12.863 "params": { 00:25:12.863 "name": "Nvme$subsystem", 00:25:12.863 "trtype": "$TEST_TRANSPORT", 00:25:12.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.863 "adrfam": "ipv4", 00:25:12.863 "trsvcid": "$NVMF_PORT", 00:25:12.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.863 "hdgst": ${hdgst:-false}, 00:25:12.863 "ddgst": ${ddgst:-false} 00:25:12.863 }, 00:25:12.863 "method": "bdev_nvme_attach_controller" 00:25:12.863 } 00:25:12.863 EOF 00:25:12.863 )") 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.863 { 00:25:12.863 "params": { 00:25:12.863 "name": "Nvme$subsystem", 00:25:12.863 "trtype": "$TEST_TRANSPORT", 00:25:12.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.863 "adrfam": "ipv4", 00:25:12.863 "trsvcid": "$NVMF_PORT", 00:25:12.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.863 "hdgst": ${hdgst:-false}, 00:25:12.863 "ddgst": ${ddgst:-false} 00:25:12.863 }, 00:25:12.863 "method": "bdev_nvme_attach_controller" 00:25:12.863 } 00:25:12.863 EOF 00:25:12.863 )") 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.863 21:54:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.863 { 00:25:12.863 "params": { 00:25:12.863 "name": "Nvme$subsystem", 00:25:12.863 "trtype": "$TEST_TRANSPORT", 00:25:12.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.863 "adrfam": "ipv4", 00:25:12.863 "trsvcid": "$NVMF_PORT", 00:25:12.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.863 "hdgst": ${hdgst:-false}, 00:25:12.863 "ddgst": ${ddgst:-false} 00:25:12.863 }, 00:25:12.863 "method": "bdev_nvme_attach_controller" 00:25:12.863 } 00:25:12.863 EOF 00:25:12.863 )") 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.863 { 00:25:12.863 "params": { 00:25:12.863 "name": "Nvme$subsystem", 00:25:12.863 "trtype": "$TEST_TRANSPORT", 00:25:12.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.863 "adrfam": "ipv4", 00:25:12.863 "trsvcid": "$NVMF_PORT", 00:25:12.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.863 "hdgst": ${hdgst:-false}, 00:25:12.863 "ddgst": ${ddgst:-false} 00:25:12.863 }, 00:25:12.863 "method": "bdev_nvme_attach_controller" 00:25:12.863 } 00:25:12.863 EOF 00:25:12.863 )") 00:25:12.863 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.863 [2024-11-29 21:54:44.937980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:12.864 [2024-11-29 21:54:44.938034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127418 ] 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.864 { 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme$subsystem", 00:25:12.864 "trtype": "$TEST_TRANSPORT", 00:25:12.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "$NVMF_PORT", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.864 "hdgst": ${hdgst:-false}, 00:25:12.864 "ddgst": ${ddgst:-false} 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 } 00:25:12.864 EOF 00:25:12.864 )") 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.864 { 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme$subsystem", 00:25:12.864 "trtype": "$TEST_TRANSPORT", 00:25:12.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "$NVMF_PORT", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.864 "hdgst": ${hdgst:-false}, 00:25:12.864 "ddgst": ${ddgst:-false} 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 } 00:25:12.864 EOF 00:25:12.864 )") 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.864 { 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme$subsystem", 00:25:12.864 "trtype": "$TEST_TRANSPORT", 00:25:12.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "$NVMF_PORT", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.864 "hdgst": ${hdgst:-false}, 00:25:12.864 "ddgst": ${ddgst:-false} 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 } 00:25:12.864 EOF 00:25:12.864 )") 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.864 { 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme$subsystem", 00:25:12.864 "trtype": "$TEST_TRANSPORT", 00:25:12.864 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "$NVMF_PORT", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.864 "hdgst": ${hdgst:-false}, 00:25:12.864 "ddgst": ${ddgst:-false} 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 } 00:25:12.864 EOF 00:25:12.864 )") 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:12.864 { 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme$subsystem", 00:25:12.864 "trtype": "$TEST_TRANSPORT", 00:25:12.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "$NVMF_PORT", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.864 "hdgst": ${hdgst:-false}, 00:25:12.864 "ddgst": ${ddgst:-false} 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 } 00:25:12.864 EOF 00:25:12.864 )") 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:25:12.864 21:54:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme1", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 },{ 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme2", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 },{ 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme3", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 },{ 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme4", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 },{ 
00:25:12.864 "params": { 00:25:12.864 "name": "Nvme5", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 },{ 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme6", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 },{ 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme7", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 },{ 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme8", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.864 "method": "bdev_nvme_attach_controller" 00:25:12.864 },{ 00:25:12.864 "params": { 00:25:12.864 "name": "Nvme9", 00:25:12.864 "trtype": "rdma", 00:25:12.864 "traddr": "192.168.100.8", 00:25:12.864 "adrfam": "ipv4", 00:25:12.864 "trsvcid": "4420", 00:25:12.864 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:12.864 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:12.864 "hdgst": false, 00:25:12.864 "ddgst": false 00:25:12.864 }, 00:25:12.865 "method": "bdev_nvme_attach_controller" 00:25:12.865 },{ 00:25:12.865 "params": { 00:25:12.865 "name": "Nvme10", 00:25:12.865 "trtype": "rdma", 00:25:12.865 "traddr": "192.168.100.8", 00:25:12.865 "adrfam": "ipv4", 00:25:12.865 "trsvcid": "4420", 00:25:12.865 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:12.865 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:12.865 "hdgst": false, 00:25:12.865 "ddgst": false 00:25:12.865 }, 00:25:12.865 "method": "bdev_nvme_attach_controller" 00:25:12.865 }' 00:25:12.865 [2024-11-29 21:54:45.011323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.865 [2024-11-29 21:54:45.050282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.798 Running I/O for 10 seconds... 
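The bdevperf command line captured earlier (`--json /dev/fd/63 -q 64 -o 65536 -w verify -t 10`) and the ten-entry document printed above fit together: gen_nvmf_target_json, defined in nvmf/common.sh, emits one bdev_nvme_attach_controller block per subsystem, jq validates the result, and bash process substitution hands it to bdevperf as /dev/fd/63. The equivalent invocation, sketched against a stock SPDK checkout:

    # Attach Nvme1..Nvme10 over RDMA and run 64-deep 64 KiB verify I/O for 10 s
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10

With -q 64 and -o 65536, each controller sees 64 outstanding 64 KiB I/Os, which is what the read_io_count polling below is waiting to observe before it lets the shutdown proceed.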
00:25:13.798 21:54:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.798 21:54:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:13.798 21:54:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:13.798 21:54:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.798 21:54:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:14.056 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:14.314 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:14.314 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:14.314 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:14.314 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:14.314 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.314 
21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3127418 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3127418 ']' 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3127418 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3127418 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3127418' 00:25:14.572 killing process with pid 3127418 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3127418 00:25:14.572 21:54:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3127418 00:25:14.831 Received shutdown signal, test time was about 0.946201 seconds 00:25:14.831 00:25:14.831 Latency(us) 00:25:14.831 [2024-11-29T20:54:47.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.831 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme1n1 : 0.93 294.73 18.42 0.00 0.00 212766.65 7811.89 211392.92 00:25:14.831 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme2n1 : 0.93 291.22 18.20 0.00 0.00 211417.02 7759.46 198810.01 00:25:14.831 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme3n1 : 0.94 307.96 19.25 0.00 0.00 196985.20 7864.32 193776.84 00:25:14.831 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme4n1 : 0.94 341.73 21.36 0.00 0.00 
174459.08 5452.60 153511.53 00:25:14.831 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme5n1 : 0.94 341.15 21.32 0.00 0.00 172201.98 8808.04 146800.64 00:25:14.831 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme6n1 : 0.94 340.62 21.29 0.00 0.00 169391.39 9594.47 136734.31 00:25:14.831 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme7n1 : 0.94 340.20 21.26 0.00 0.00 166032.34 9961.47 131701.15 00:25:14.831 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme8n1 : 0.94 339.69 21.23 0.00 0.00 163648.80 10433.33 127506.84 00:25:14.831 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme9n1 : 0.94 339.08 21.19 0.00 0.00 161526.91 11324.62 114085.07 00:25:14.831 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:14.831 Verification LBA range: start 0x0 length 0x400 00:25:14.831 Nvme10n1 : 0.95 270.78 16.92 0.00 0.00 198297.29 8650.75 248302.80 00:25:14.831 [2024-11-29T20:54:47.079Z] =================================================================================================================== 00:25:14.831 [2024-11-29T20:54:47.079Z] Total : 3207.17 200.45 0.00 0.00 181280.46 5452.60 248302.80 00:25:15.089 21:54:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3127114 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
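The pid checks traced above (kill -0, ps --no-headers -o comm=, the reactor_0/reactor_1 name test) are the autotest killprocess pattern: confirm the pid is still alive, make sure it does not resolve to sudo itself, then kill and reap it. A simplified sketch of that helper (an approximation reconstructed from the traced commands; the real one lives in test/common/autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                 # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [ "$name" != sudo ] || return 1            # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

With both bdevperf (pid 3127418) and the target (pid 3127114) gone, nvmfcleanup unloads the kernel modules; the rmmod lines that follow are the verbose output of the `modprobe -v -r nvme-rdma` just traced.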
00:25:16.024 rmmod nvme_rdma 00:25:16.024 rmmod nvme_fabrics 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 3127114 ']' 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 3127114 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3127114 ']' 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3127114 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.024 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3127114 00:25:16.284 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:16.284 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:16.284 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3127114' 00:25:16.284 killing process with pid 3127114 00:25:16.284 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3127114 00:25:16.284 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3127114 00:25:16.543 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:16.543 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:16.543 00:25:16.543 real 0m5.128s 00:25:16.543 user 0m20.521s 00:25:16.543 sys 0m1.151s 00:25:16.543 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:16.543 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.543 ************************************ 00:25:16.543 END TEST nvmf_shutdown_tc2 00:25:16.543 ************************************ 00:25:16.543 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@171 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:16.543 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:16.543 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:16.543 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:16.805 ************************************ 00:25:16.805 START TEST nvmf_shutdown_tc3 00:25:16.805 ************************************ 00:25:16.805 21:54:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:16.805 21:54:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.805 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:16.806 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:16.806 
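The e810/x722/mlx arrays being filled above bucket the supported NIC PCI device IDs by family so the scan can classify whatever hardware is in the box; here it finds 0x15b3:0x1015, a ConnectX-4 Lx, on both functions of slot 0000:d9:00. The same discovery can be sketched with lspci (a simplification; the real helper first caches the whole bus in pci_bus_cache):

    mellanox=15b3
    # Domain-qualified PCI addresses of every Mellanox function, e.g. 0000:d9:00.0
    mapfile -t pci_devs < <(lspci -Dn -d "${mellanox}:" | awk '{print $1}')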
21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:16.806 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:16.806 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:16.806 
21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:16.806 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # rdma_device_init 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:16.806 21:54:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.806 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:16.807 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:16.807 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:16.807 altname enp217s0f0np0 00:25:16.807 altname ens818f0np0 00:25:16.807 inet 192.168.100.8/24 scope global mlx_0_0 00:25:16.807 valid_lft forever preferred_lft forever 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:16.807 21:54:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:16.807 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:16.807 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:16.807 altname enp217s0f1np1 00:25:16.807 altname ens818f1np1 00:25:16.807 inet 192.168.100.9/24 scope global mlx_0_1 00:25:16.807 valid_lft forever preferred_lft forever 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:16.807 21:54:48 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:16.807 21:54:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:16.807 192.168.100.9' 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:16.807 192.168.100.9' 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # head -n 1 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:16.807 192.168.100.9' 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # tail -n +2 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # head -n 1 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:16.807 21:54:49 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:16.807 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=3128087 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 3128087 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3128087 ']' 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.808 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:17.067 [2024-11-29 21:54:49.090662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:17.067 [2024-11-29 21:54:49.090716] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.067 [2024-11-29 21:54:49.160007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.067 [2024-11-29 21:54:49.199226] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.067 [2024-11-29 21:54:49.199269] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:17.067 [2024-11-29 21:54:49.199278] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.067 [2024-11-29 21:54:49.199286] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.067 [2024-11-29 21:54:49.199308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.067 [2024-11-29 21:54:49.199407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.067 [2024-11-29 21:54:49.199500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.067 [2024-11-29 21:54:49.199610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.067 [2024-11-29 21:54:49.199612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:17.067 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.067 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:17.067 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:17.067 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:17.067 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.326 [2024-11-29 21:54:49.378432] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7cf250/0x7d3700) succeed. 00:25:17.326 [2024-11-29 21:54:49.388935] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7d0840/0x814da0) succeed. 
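With the addresses in hand, common.sh@487-498 fixes NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024' and loads nvme-rdma, and nvmfappstart launches nvmf_tgt with -m 0x1E, which is why the reactors above come up on cores 1-4. The shutdown.sh@21 transport creation then registers both mlx5 IB devices. The same call can be issued standalone; rpc_cmd in the harness forwards to scripts/rpc.py against /var/tmp/spdk.sock:

    # standalone form of the shutdown.sh@21 RPC; -u is the transport
    # I/O unit size (8 KiB in this run)
    scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192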
00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.326 21:54:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.586 Malloc1 00:25:17.586 [2024-11-29 21:54:49.609968] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:17.586 Malloc2 00:25:17.586 Malloc3 00:25:17.586 Malloc4 00:25:17.586 Malloc5 00:25:17.586 Malloc6 00:25:17.845 Malloc7 00:25:17.845 Malloc8 00:25:17.845 Malloc9 00:25:17.845 Malloc10 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3128394 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3128394 /var/tmp/bdevperf.sock 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3128394 ']' 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
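shutdown.sh@27-29 above batches the per-subsystem setup into rpcs.txt, one cat block per i in 1..10, and shutdown.sh@36 replays the whole file through a single rpc_cmd. The heredoc bodies are not echoed by xtrace, but given the Malloc1-Malloc10 bdevs and the NVMe/RDMA listener on 192.168.100.8 port 4420 that appear next, each iteration plausibly appends four RPCs along these lines (the malloc size, block size, and serial number are illustrative placeholders, not values from this log):

    # hypothetical per-subsystem block appended to rpcs.txt for each $i
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420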
00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:17.845 { 00:25:17.845 "params": { 00:25:17.845 "name": "Nvme$subsystem", 00:25:17.845 "trtype": "$TEST_TRANSPORT", 00:25:17.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.845 "adrfam": "ipv4", 00:25:17.845 "trsvcid": "$NVMF_PORT", 00:25:17.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.845 "hdgst": ${hdgst:-false}, 00:25:17.845 "ddgst": ${ddgst:-false} 00:25:17.845 }, 00:25:17.845 "method": "bdev_nvme_attach_controller" 00:25:17.845 } 00:25:17.845 EOF 00:25:17.845 )") 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:17.845 { 00:25:17.845 "params": { 00:25:17.845 "name": "Nvme$subsystem", 00:25:17.845 "trtype": "$TEST_TRANSPORT", 00:25:17.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.845 "adrfam": "ipv4", 00:25:17.845 "trsvcid": "$NVMF_PORT", 00:25:17.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.845 "hdgst": ${hdgst:-false}, 00:25:17.845 "ddgst": ${ddgst:-false} 00:25:17.845 }, 00:25:17.845 "method": "bdev_nvme_attach_controller" 00:25:17.845 } 00:25:17.845 EOF 00:25:17.845 )") 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:17.845 { 00:25:17.845 "params": { 00:25:17.845 "name": "Nvme$subsystem", 00:25:17.845 "trtype": "$TEST_TRANSPORT", 00:25:17.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.845 "adrfam": "ipv4", 00:25:17.845 "trsvcid": "$NVMF_PORT", 00:25:17.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.845 "hdgst": ${hdgst:-false}, 00:25:17.845 "ddgst": ${ddgst:-false} 00:25:17.845 }, 00:25:17.845 "method": "bdev_nvme_attach_controller" 00:25:17.845 } 00:25:17.845 EOF 00:25:17.845 )") 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:17.845 21:54:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:17.845 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:17.845 { 00:25:17.845 "params": { 00:25:17.845 "name": "Nvme$subsystem", 00:25:17.845 "trtype": "$TEST_TRANSPORT", 00:25:17.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.845 "adrfam": "ipv4", 00:25:17.845 "trsvcid": "$NVMF_PORT", 00:25:17.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.845 "hdgst": ${hdgst:-false}, 00:25:17.845 "ddgst": ${ddgst:-false} 00:25:17.845 }, 00:25:17.846 "method": "bdev_nvme_attach_controller" 00:25:17.846 } 00:25:17.846 EOF 00:25:17.846 )") 00:25:17.846 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:17.846 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:17.846 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:17.846 { 00:25:17.846 "params": { 00:25:17.846 "name": "Nvme$subsystem", 00:25:17.846 "trtype": "$TEST_TRANSPORT", 00:25:17.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.846 "adrfam": "ipv4", 00:25:17.846 "trsvcid": "$NVMF_PORT", 00:25:17.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.846 "hdgst": ${hdgst:-false}, 00:25:17.846 "ddgst": ${ddgst:-false} 00:25:17.846 }, 00:25:17.846 "method": "bdev_nvme_attach_controller" 00:25:17.846 } 00:25:17.846 EOF 00:25:17.846 )") 00:25:17.846 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:18.105 { 00:25:18.105 "params": { 00:25:18.105 "name": "Nvme$subsystem", 00:25:18.105 "trtype": "$TEST_TRANSPORT", 00:25:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.105 "adrfam": "ipv4", 00:25:18.105 "trsvcid": "$NVMF_PORT", 00:25:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.105 "hdgst": ${hdgst:-false}, 00:25:18.105 "ddgst": ${ddgst:-false} 00:25:18.105 }, 00:25:18.105 "method": "bdev_nvme_attach_controller" 00:25:18.105 } 00:25:18.105 EOF 00:25:18.105 )") 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:18.105 [2024-11-29 21:54:50.095143] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
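The common.sh@556-578 lines above and below are gen_nvmf_target_json collecting one bdev_nvme_attach_controller fragment per requested subsystem id; common.sh@580-582 then comma-joins the fragments and pretty-prints the result through jq. A condensed reconstruction, with the ${hdgst:-false}/${ddgst:-false} expansions reduced to their false defaults and an assumed outer envelope (the envelope itself is not visible in this xtrace):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # one attach_controller fragment per subsystem id, matching
            # the text echoed in the trace
            config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "%s",
                "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s",
                "subnqn": "nqn.2016-06.io.spdk:cnode%s",
                "hostnqn": "nqn.2016-06.io.spdk:host%s",
                "hdgst": false, "ddgst": false },
                "method": "bdev_nvme_attach_controller" }' \
                "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" \
                "$NVMF_PORT" "$subsystem" "$subsystem")")
        done
        local IFS=,
        # comma-join via IFS and wrap; jq . validates and pretty-prints
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' \
            "${config[*]}" | jq .
    }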
00:25:18.105 [2024-11-29 21:54:50.095201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128394 ] 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:18.105 { 00:25:18.105 "params": { 00:25:18.105 "name": "Nvme$subsystem", 00:25:18.105 "trtype": "$TEST_TRANSPORT", 00:25:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.105 "adrfam": "ipv4", 00:25:18.105 "trsvcid": "$NVMF_PORT", 00:25:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.105 "hdgst": ${hdgst:-false}, 00:25:18.105 "ddgst": ${ddgst:-false} 00:25:18.105 }, 00:25:18.105 "method": "bdev_nvme_attach_controller" 00:25:18.105 } 00:25:18.105 EOF 00:25:18.105 )") 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:18.105 { 00:25:18.105 "params": { 00:25:18.105 "name": "Nvme$subsystem", 00:25:18.105 "trtype": "$TEST_TRANSPORT", 00:25:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.105 "adrfam": "ipv4", 00:25:18.105 "trsvcid": "$NVMF_PORT", 00:25:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.105 "hdgst": ${hdgst:-false}, 00:25:18.105 "ddgst": ${ddgst:-false} 00:25:18.105 }, 00:25:18.105 "method": "bdev_nvme_attach_controller" 00:25:18.105 } 00:25:18.105 EOF 00:25:18.105 )") 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:18.105 { 00:25:18.105 "params": { 00:25:18.105 "name": "Nvme$subsystem", 00:25:18.105 "trtype": "$TEST_TRANSPORT", 00:25:18.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.105 "adrfam": "ipv4", 00:25:18.105 "trsvcid": "$NVMF_PORT", 00:25:18.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.105 "hdgst": ${hdgst:-false}, 00:25:18.105 "ddgst": ${ddgst:-false} 00:25:18.105 }, 00:25:18.105 "method": "bdev_nvme_attach_controller" 00:25:18.105 } 00:25:18.105 EOF 00:25:18.105 )") 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:18.105 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:18.105 { 00:25:18.105 "params": { 00:25:18.105 "name": "Nvme$subsystem", 00:25:18.105 "trtype": "$TEST_TRANSPORT", 00:25:18.106 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "$NVMF_PORT", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.106 "hdgst": ${hdgst:-false}, 00:25:18.106 "ddgst": ${ddgst:-false} 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 } 00:25:18.106 EOF 00:25:18.106 )") 00:25:18.106 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:18.106 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:25:18.106 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:25:18.106 21:54:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme1", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme2", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme3", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme4", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme5", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme6", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme7", 00:25:18.106 
"trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme8", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme9", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 },{ 00:25:18.106 "params": { 00:25:18.106 "name": "Nvme10", 00:25:18.106 "trtype": "rdma", 00:25:18.106 "traddr": "192.168.100.8", 00:25:18.106 "adrfam": "ipv4", 00:25:18.106 "trsvcid": "4420", 00:25:18.106 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:18.106 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:18.106 "hdgst": false, 00:25:18.106 "ddgst": false 00:25:18.106 }, 00:25:18.106 "method": "bdev_nvme_attach_controller" 00:25:18.106 }' 00:25:18.106 [2024-11-29 21:54:50.168646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.106 [2024-11-29 21:54:50.207585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.044 Running I/O for 10 seconds... 
00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.044 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.303 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:19.304 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:19.304 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:19.563 21:54:51 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=148 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 148 -ge 100 ']' 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3128087 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3128087 ']' 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3128087 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.563 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3128087 00:25:19.822 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:19.822 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:19.822 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3128087' 00:25:19.822 killing process with pid 3128087 00:25:19.822 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3128087 00:25:19.822 21:54:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3128087 00:25:20.081 2572.00 IOPS, 160.75 MiB/s [2024-11-29T20:54:52.329Z] 21:54:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # nvmfpid= 00:25:20.081 21:54:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # sleep 1 00:25:20.651 [2024-11-29 21:54:52.863879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.651 [2024-11-29 21:54:52.863921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:958e p:1 m:0 dnr:0 00:25:20.651 [2024-11-29 21:54:52.863936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
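The shutdown.sh@51-70 lines interleaved above are the waitforio gate: tc3 only kills the target once Nvme1n1 has demonstrably completed I/O. A reconstruction of the loop as this trace executed it (the first poll returned num_read_ops=3, the second 148, clearing the -ge 100 bar):

    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        # up to 10 polls, 0.25 s apart, until the bdev reports >= 100 reads
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

With the gate cleared, shutdown.sh@136 killprocess takes down the nvmf target (pid 3128087, process name reactor_1); the 2572.00 IOPS / 160.75 MiB/s line is bdevperf's last periodic sample before the connections drop.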
00:25:20.651 [2024-11-29 21:54:52.863951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:958e p:1 m:0 dnr:0 00:25:20.651 [2024-11-29 21:54:52.863960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.651 [2024-11-29 21:54:52.863969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:958e p:1 m:0 dnr:0 00:25:20.651 [2024-11-29 21:54:52.863978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.651 [2024-11-29 21:54:52.863987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:958e p:1 m:0 dnr:0 00:25:20.651 [2024-11-29 21:54:52.865982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.651 [2024-11-29 21:54:52.866044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:20.651 [2024-11-29 21:54:52.866097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.651 [2024-11-29 21:54:52.866130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:2190 p:1 m:0 dnr:0 00:25:20.651 [2024-11-29 21:54:52.866162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.866191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:2190 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.866224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.866254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:2190 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.866285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.866314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:2190 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.868280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.652 [2024-11-29 21:54:52.868293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:25:20.652 [2024-11-29 21:54:52.868319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.868328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:ce2e p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.868336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.868344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:ce2e p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.868371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.868383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:ce2e p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.868396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.868408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:ce2e p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.870477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.652 [2024-11-29 21:54:52.870519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:20.652 [2024-11-29 21:54:52.870572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.870604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:6af6 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.870636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.870677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:6af6 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.870710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.870740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:6af6 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.870771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.870801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:6af6 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.873046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.652 [2024-11-29 21:54:52.873063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:20.652 [2024-11-29 21:54:52.873084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.873098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:2176 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.873111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.873123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:2176 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.873136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.873148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:2176 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.873161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.873174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:2176 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.875512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.652 [2024-11-29 21:54:52.875552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:20.652 [2024-11-29 21:54:52.875601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.875633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:cd92 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.875699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.875731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:cd92 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.875770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.875799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:cd92 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.875832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.875862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:cd92 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.877962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.652 [2024-11-29 21:54:52.877978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:20.652 [2024-11-29 21:54:52.878001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.878014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:3c34 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.878028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.878040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:3c34 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.878053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.878065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:3c34 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.878077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.878089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:3c34 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.880507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.652 [2024-11-29 21:54:52.880548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:20.652 [2024-11-29 21:54:52.880595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.880628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:dc9e p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.880660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.880693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:dc9e p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.880707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.880719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:dc9e p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.880732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.880744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:dc9e p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.883144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.652 [2024-11-29 21:54:52.883184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:25:20.652 [2024-11-29 21:54:52.883243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.883276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:9ed6 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.883308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.883338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:9ed6 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.883369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.883399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:9ed6 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.883431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.883460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:9ed6 p:1 m:0 dnr:0 00:25:20.652 [2024-11-29 21:54:52.885993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.652 [2024-11-29 21:54:52.886034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:20.652 [2024-11-29 21:54:52.886090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.652 [2024-11-29 21:54:52.886122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:efd2 p:1 m:0 dnr:0 00:25:20.653 [2024-11-29 21:54:52.886155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.653 [2024-11-29 21:54:52.886184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:efd2 p:1 m:0 dnr:0 00:25:20.653 [2024-11-29 21:54:52.886216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.653 [2024-11-29 21:54:52.886245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:efd2 p:1 m:0 dnr:0 00:25:20.653 [2024-11-29 21:54:52.886277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.653 [2024-11-29 21:54:52.886307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58584 cdw0:0 sqhd:efd2 p:1 m:0 dnr:0 00:25:20.653 [2024-11-29 21:54:52.889034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:20.653 [2024-11-29 21:54:52.889051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:25:20.653 [2024-11-29 21:54:52.891158] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a78c40 was disconnected and freed. reset controller. 00:25:20.653 [2024-11-29 21:54:52.891202] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.653 [2024-11-29 21:54:52.893805] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a78980 was disconnected and freed. reset controller. 00:25:20.653 [2024-11-29 21:54:52.893823] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.653 [2024-11-29 21:54:52.896286] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a075c0 was disconnected and freed. reset controller. 00:25:20.653 [2024-11-29 21:54:52.896330] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.914 [2024-11-29 21:54:52.898794] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a07300 was disconnected and freed. reset controller. 00:25:20.914 [2024-11-29 21:54:52.898847] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.914 [2024-11-29 21:54:52.901142] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a07040 was disconnected and freed. reset controller. 00:25:20.914 [2024-11-29 21:54:52.901183] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.914 [2024-11-29 21:54:52.903448] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a06d80 was disconnected and freed. reset controller. 00:25:20.914 [2024-11-29 21:54:52.903491] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.914 [2024-11-29 21:54:52.905780] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a06ac0 was disconnected and freed. reset controller. 00:25:20.914 [2024-11-29 21:54:52.905798] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.914 [2024-11-29 21:54:52.907820] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019a06800 was disconnected and freed. reset controller. 00:25:20.914 [2024-11-29 21:54:52.907861] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:20.914 [2024-11-29 21:54:52.910179] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001a40f140 was disconnected and freed. reset controller. 00:25:20.914 [2024-11-29 21:54:52.910221] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
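Everything from the first CQ transport error -6 (No such device or address) down to here is the host-side consequence of that kill: each of the ten attached controllers (cnode1 through cnode10) is moved to the failed state, its outstanding admin ASYNC EVENT REQUESTs (qid:0 cid:1-4) complete as ABORTED - SQ DELETION, the qpairs are disconnected and freed, and the resulting controller resets collapse into the failover already in progress, which is the teardown path this test case is built to exercise.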
00:25:20.914 [2024-11-29 21:54:52.910465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf680 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.910504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.910568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf600 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.910603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.910645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f580 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.910692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.910736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8f500 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.910767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.910810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7f480 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.910841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.910884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6f400 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.910916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.910959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5f380 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.910997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.911040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4f300 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.911071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.911112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3f280 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.911125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 
00:25:20.914 [2024-11-29 21:54:52.911142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2f200 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.911155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.911172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1f180 len:0x10000 key:0x184600 00:25:20.914 [2024-11-29 21:54:52.911185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.914 [2024-11-29 21:54:52.911202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0f100 len:0x10000 key:0x184600 00:25:20.915 [2024-11-29 21:54:52.911215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031f0000 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff80 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cff00 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfe80 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afe00 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fd80 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 
00:25:20.915 [2024-11-29 21:54:52.911412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fd00 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fc80 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316fc00 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315fb80 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314fb00 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313fa80 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312fa00 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f980 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f900 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 
00:25:20.915 [2024-11-29 21:54:52.911686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff880 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef800 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df780 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf700 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf680 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af600 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f580 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308f500 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307f480 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 
00:25:20.915 [2024-11-29 21:54:52.911955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306f400 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.911985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305f380 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.911998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.912015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304f300 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.912027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.912044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303f280 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.912057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.912076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302f200 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.912089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.912106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301f180 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.912119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.912136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300f100 len:0x10000 key:0x184300 00:25:20.915 [2024-11-29 21:54:52.912148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.912165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033f0000 len:0x10000 key:0x184900 00:25:20.915 [2024-11-29 21:54:52.912178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.912195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff80 len:0x10000 key:0x184900 00:25:20.915 [2024-11-29 21:54:52.912208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 
00:25:20.915 [2024-11-29 21:54:52.912225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cff00 len:0x10000 key:0x184900 00:25:20.915 [2024-11-29 21:54:52.912237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.915 [2024-11-29 21:54:52.912255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfe80 len:0x10000 key:0x184900 00:25:20.915 [2024-11-29 21:54:52.912267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afe00 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fd80 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fd00 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fc80 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336fc00 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335fb80 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334fb00 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 
00:25:20.916 [2024-11-29 21:54:52.912497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333fa80 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332fa00 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f980 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f900 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff880 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef800 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df780 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf700 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 00:25:20.916 [2024-11-29 21:54:52.912745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf680 len:0x10000 key:0x184900 00:25:20.916 [2024-11-29 21:54:52.912758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0 
00:25:20.916 [2024-11-29 21:54:52.912775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf700 len:0x10000 key:0x184600
00:25:20.916 [2024-11-29 21:54:52.912788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58584 cdw0:67441000 sqhd:5f22 p:1 m:0 dnr:0
00:25:20.916 [2024-11-29 21:54:52.931432] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001a40ee80 was disconnected and freed. reset controller.
00:25:20.916 [2024-11-29 21:54:52.931484] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931652] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931717] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931757] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931799] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931843] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931883] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931921] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931934] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931946] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:20.916 [2024-11-29 21:54:52.931958] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
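Editor's note: the long run of paired nvme_qpair.c messages above is bdev_nvme dumping every still-outstanding WRITE on qpair 1 as the submission queue is deleted during the tc3 shutdown; each command is completed with ABORTED - SQ DELETION (status 00/08) instead of being executed, after which the qpair is freed and a controller reset is scheduled. When reading a dump like this, what usually matters is how many commands were aborted per queue and which LBA span they covered. A minimal shell sketch for condensing such a dump, assuming the console output has been saved to ./console.log (hypothetical path; the grep patterns are copied from the message format above):

  log=./console.log
  # Count ABORTED - SQ DELETION completions per submission queue id.
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' "$log" | awk '{print $NF}' | sort | uniq -c
  # Report the LBA span of the aborted WRITEs on sqid 1.
  grep -o 'WRITE sqid:1 cid:[0-9]* nsid:[0-9]* lba:[0-9]*' "$log" \
    | sed 's/.*lba://' | sort -n \
    | awk 'NR==1{min=$1} {max=$1} END{printf "aborted WRITEs span lba %d..%d\n", min, max}'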
00:25:20.916 [2024-11-29 21:54:52.933234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:20.916 [2024-11-29 21:54:52.933251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:20.916 [2024-11-29 21:54:52.933262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:20.916 [2024-11-29 21:54:52.933272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:20.916 [2024-11-29 21:54:52.933282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:20.916 [2024-11-29 21:54:52.939247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:20.916 [2024-11-29 21:54:52.939272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:20.916 [2024-11-29 21:54:52.939283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:20.916 [2024-11-29 21:54:52.939453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:20.916 task offset: 34944 on job bdev=Nvme1n1 fails
00:25:20.916
00:25:20.916 Latency(us)
00:25:20.916 [2024-11-29T20:54:53.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:20.916 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme1n1 ended in about 1.85 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme1n1 : 1.85 138.18 8.64 34.54 0.00 366912.31 26528.97 1040187.39
00:25:20.916 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme2n1 ended in about 1.85 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme2n1 : 1.85 138.13 8.63 34.53 0.00 363722.18 27472.69 1040187.39
00:25:20.916 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme3n1 ended in about 1.85 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme3n1 : 1.85 138.08 8.63 34.52 0.00 360632.81 32086.43 1040187.39
00:25:20.916 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme4n1 ended in about 1.85 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme4n1 : 1.85 154.21 9.64 34.51 0.00 326891.17 5164.24 1033476.51
00:25:20.916 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme5n1 ended in about 1.86 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme5n1 : 1.86 144.46 9.03 34.50 0.00 341745.60 9279.90 1033476.51
00:25:20.916 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme6n1 ended in about 1.86 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme6n1 : 1.86 146.57 9.16 34.49 0.00 334880.61 11953.77 1033476.51
00:25:20.916 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme7n1 ended in about 1.86 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme7n1 : 1.86 146.52 9.16 34.47 0.00 332020.98 16986.93 1033476.51
00:25:20.916 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme8n1 ended in about 1.86 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme8n1 : 1.86 138.93 8.68 34.46 0.00 343387.15 21705.52 1033476.51
00:25:20.916 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme9n1 ended in about 1.86 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme9n1 : 1.86 137.81 8.61 34.45 0.00 342456.73 50751.08 1033476.51
00:25:20.916 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.916 Job: Nvme10n1 ended in about 1.83 seconds with error
00:25:20.916 Verification LBA range: start 0x0 length 0x400
00:25:20.916 Nvme10n1 : 1.83 104.79 6.55 34.93 0.00 420252.06 55364.81 1073741.82
00:25:20.916 [2024-11-29T20:54:53.165Z] ===================================================================================================================
00:25:20.917 [2024-11-29T20:54:53.165Z] Total : 1387.68 86.73 345.41 0.00 351462.41 5164.24 1073741.82
00:25:20.917 [2024-11-29 21:54:52.965377] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:20.917 [2024-11-29 21:54:52.965402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:20.917 [2024-11-29 21:54:52.980632] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:20.917 [2024-11-29 21:54:52.980721] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:20.917 [2024-11-29 21:54:52.980751] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000
00:25:20.917 [2024-11-29 21:54:52.980865] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:20.917 [2024-11-29 21:54:52.980899] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:20.917 [2024-11-29 21:54:52.980924] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019ae5280
00:25:20.917 [2024-11-29 21:54:52.981047] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:20.917 [2024-11-29 21:54:52.981088] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:20.917 [2024-11-29 21:54:52.981112] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aba2c0
00:25:20.917 [2024-11-29 21:54:52.981215] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:20.917 [2024-11-29 21:54:52.981230] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:20.917 [2024-11-29 21:54:52.981239] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019ab9ac0
00:25:20.917 [2024-11-29 21:54:52.984711] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:20.917 [2024-11-29 21:54:52.984765]
nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:20.917 [2024-11-29 21:54:52.984792] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019abd300 00:25:20.917 [2024-11-29 21:54:52.985052] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:20.917 [2024-11-29 21:54:52.985095] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:20.917 [2024-11-29 21:54:52.985119] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a8dac0 00:25:20.917 [2024-11-29 21:54:52.985229] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:20.917 [2024-11-29 21:54:52.985263] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:20.917 [2024-11-29 21:54:52.985287] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019abf2c0 00:25:20.917 [2024-11-29 21:54:52.985398] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:20.917 [2024-11-29 21:54:52.985412] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:20.917 [2024-11-29 21:54:52.985422] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019adc080 00:25:20.917 [2024-11-29 21:54:52.985914] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:20.917 [2024-11-29 21:54:52.985933] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:20.917 [2024-11-29 21:54:52.985944] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a8e300 00:25:20.917 [2024-11-29 21:54:52.986011] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:20.917 [2024-11-29 21:54:52.986025] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:20.917 [2024-11-29 21:54:52.986035] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019a8d2c0 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # kill -9 3128394 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@145 -- # stoptarget 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:21.176 21:54:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:21.176 rmmod nvme_rdma 00:25:21.176 rmmod nvme_fabrics 00:25:21.176 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 125: 3128394 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:21.176 00:25:21.176 real 0m4.557s 00:25:21.176 user 0m15.008s 00:25:21.176 sys 0m1.259s 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:21.176 ************************************ 00:25:21.176 END TEST nvmf_shutdown_tc3 00:25:21.176 ************************************ 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@174 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:21.176 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:21.436 ************************************ 00:25:21.436 START TEST nvmf_shutdown_tc4 00:25:21.436 ************************************ 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # starttarget 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:21.436 
21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.436 21:54:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:21.436 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:21.436 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:21.436 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:21.437 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:21.437 Found net devices under 
0000:d9:00.1: mlx_0_1 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # rdma_device_init 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:21.437 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:21.437 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:21.437 altname enp217s0f0np0 00:25:21.437 altname ens818f0np0 00:25:21.437 inet 192.168.100.8/24 scope global mlx_0_0 00:25:21.437 valid_lft forever preferred_lft forever 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:21.437 21:54:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:21.437 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:21.437 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:21.437 altname enp217s0f1np1 00:25:21.437 altname ens818f1np1 00:25:21.437 inet 192.168.100.9/24 scope global mlx_0_1 00:25:21.437 valid_lft forever preferred_lft forever 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:21.437 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.438 
21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:21.438 192.168.100.9' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:21.438 192.168.100.9' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # head -n 1 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:21.438 192.168.100.9' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # tail -n +2 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # head -n 1 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:21.438 21:54:53 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=3129047 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 3129047 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3129047 ']' 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.438 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:21.696 [2024-11-29 21:54:53.728482] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:21.696 [2024-11-29 21:54:53.728533] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.696 [2024-11-29 21:54:53.799451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.696 [2024-11-29 21:54:53.838915] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.696 [2024-11-29 21:54:53.838957] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.696 [2024-11-29 21:54:53.838966] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.696 [2024-11-29 21:54:53.838974] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.696 [2024-11-29 21:54:53.838984] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:21.696 [2024-11-29 21:54:53.839087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.696 [2024-11-29 21:54:53.839192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.696 [2024-11-29 21:54:53.839303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.696 [2024-11-29 21:54:53.839304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:21.696 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.696 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:25:21.697 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:21.697 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:21.697 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.955 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.955 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:21.955 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.955 21:54:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.955 [2024-11-29 21:54:54.007536] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6d1250/0x6d5700) succeed. 00:25:21.955 [2024-11-29 21:54:54.018149] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6d2840/0x716da0) succeed. 
00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat [the @28/@29 trace pair above repeats once per subsystem, 10 times in total; identical duplicates condensed] 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.955 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:22.213 Malloc1 00:25:22.214 [2024-11-29 21:54:54.239591] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:22.214 Malloc2 00:25:22.214 Malloc3 00:25:22.214 Malloc4 00:25:22.214 Malloc5 00:25:22.214 Malloc6 00:25:22.472 Malloc7 00:25:22.472 Malloc8 00:25:22.472 Malloc9 00:25:22.472 Malloc10 00:25:22.472 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.472 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:22.472 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.472 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:22.472 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@154 -- # perfpid=3129352 00:25:22.472 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # sleep 5 00:25:22.472 21:54:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@153 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:25:22.730 [2024-11-29 21:54:54.763636] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@157 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@160 -- # killprocess 3129047 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3129047 ']' 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3129047 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3129047 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3129047' 00:25:28.008 killing process with pid 3129047 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3129047 00:25:28.008 21:54:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3129047 
00:25:28.008 NVMe io qpair process completion error 00:25:28.008 NVMe io qpair process completion error 00:25:28.008 NVMe io qpair process completion error 00:25:28.266 21:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@161 -- # nvmfpid= 00:25:28.266 21:55:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@164 -- # sleep 1 
00:25:28.833 [2024-11-29 21:55:00.827111] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Submitting Keep Alive failed 00:25:28.833 [2024-11-29 21:55:00.831824] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:28.833 NVMe io qpair process completion error 00:25:28.833 [2024-11-29 21:55:00.832903] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Submitting Keep Alive failed 00:25:28.833 NVMe io qpair process completion error 00:25:28.833 NVMe io qpair process completion error 00:25:28.833 NVMe io qpair process completion error 
00:25:28.833 Write completed with error (sct=0, sc=8) [identical record repeated many times for the outstanding I/Os; duplicates condensed] 
00:25:28.834 NVMe io qpair process completion error 
00:25:28.834 Write completed with error (sct=0, sc=8) [duplicates condensed] 
00:25:28.834 NVMe io qpair process completion error 00:25:28.834 NVMe io qpair process completion error 
00:25:28.835 Write completed with error (sct=0, sc=8) [duplicates condensed] 
00:25:29.401 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # wait 3129352 
00:25:29.662 Write completed with error (sct=0, sc=8) [duplicates condensed] 
00:25:29.663 [2024-11-29 21:55:01.838006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.663 [2024-11-29 21:55:01.838074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:29.663 Write completed with error (sct=0, sc=8) [duplicates condensed] 
00:25:29.663 [2024-11-29 21:55:01.846166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.663 Write completed with error (sct=0, sc=8) 00:25:29.663 Write completed with error (sct=0, sc=8) 00:25:29.663 [2024-11-29 21:55:01.846211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:29.664 Write completed with error (sct=0, sc=8) [duplicates condensed] 
00:25:29.664 [2024-11-29 21:55:01.857195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.664 Write completed with error (sct=0, sc=8) 00:25:29.664 Write completed with error (sct=0, sc=8) 00:25:29.664 [2024-11-29 21:55:01.857261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:25:29.664 Write completed with error (sct=0, sc=8) [duplicates condensed] 
00:25:29.665 [2024-11-29 21:55:01.869284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 [2024-11-29 21:55:01.869357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:25:29.665 Write completed with error (sct=0, sc=8) [duplicates condensed] 
00:25:29.665 [2024-11-29 21:55:01.881315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 [2024-11-29 21:55:01.881377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.665 Write completed with error 
(sct=0, sc=8) 00:25:29.665 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 [2024-11-29 21:55:01.883977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.666 [2024-11-29 21:55:01.884022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, 
sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 [2024-11-29 21:55:01.893974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 [2024-11-29 21:55:01.894047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 
00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 [2024-11-29 21:55:01.896709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.666 [2024-11-29 21:55:01.896753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 
00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.666 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 [2024-11-29 21:55:01.905635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 [2024-11-29 
21:55:01.905716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.667 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error (sct=0, sc=8) 00:25:29.926 Write completed with error 
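Seven subsystems fail this way in this stretch of log (cnode6, cnode7, cnode1, cnode9, cnode10 and cnode5 above, plus cnode8 just below). For triage it helps to reduce the flood to the set of failed subsystems; a minimal sketch, assuming the console output was saved to a file (`build.log` is a hypothetical name):

```bash
# Extract the unique subsystem NQNs that nvme_ctrlr_fail marked as failed.
grep 'nvme_ctrlr_fail' build.log \
    | grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*' \
    | sort -u
```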
00:25:29.926 Write completed with error (sct=0, sc=8) [identical entries collapsed]
00:25:29.926 [2024-11-29 21:55:01.944572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:29.926 [2024-11-29 21:55:01.944604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:25:29.926 Initializing NVMe Controllers
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:25:29.926 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:25:29.926 Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. [warning printed once per attached controller; duplicates collapsed]
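The queue-size warning is actionable from the perf command line. A hedged sketch of re-running this workload with a smaller queue depth and IO size, reusing the transport details shown in this log (the flag set should be confirmed against the installed spdk_nvme_perf; -q, -o, -w, -t and -r are its usual queue-depth, IO-size, workload, duration and transport options):

```bash
# Re-run the write workload against one subsystem with queue depth 32 and
# 4 KiB IOs, so requests are not queued up at the NVMe driver.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 32 -o 4096 -w write -t 10 \
    -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'
```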
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:29.926 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:29.926 Initialization complete. Launching workers.
00:25:29.926 ========================================================
00:25:29.926 Latency(us)
00:25:29.926 Device Information : IOPS MiB/s Average min max
00:25:29.926 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1565.53 67.27 82461.68 112.39 1216159.50
00:25:29.926 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1574.69 67.66 95078.94 114.14 2155219.37
00:25:29.926 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1576.72 67.75 95044.83 114.20 2146325.60
00:25:29.926 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1586.56 68.17 94555.09 117.21 2148572.57
00:25:29.926 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1593.85 68.49 94232.47 112.81 2140178.13
00:25:29.926 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1548.23 66.53 82659.53 117.31 1200503.70
00:25:29.926 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1633.20 70.18 92019.06 113.01 2003905.12
00:25:29.927 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1546.87 66.47 82823.47 113.64 1204483.82
00:25:29.927 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1566.21 67.30 96012.96 113.71 2187765.42
00:25:29.927 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1592.50 68.43 94500.65 115.63 2128497.28
00:25:29.927 ========================================================
00:25:29.927 Total : 15784.37 678.23 90984.51 112.39 2187765.42
00:25:29.927
00:25:29.927 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
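A quick consistency check on the summary table: the Total IOPS figure should be the sum of the per-subsystem IOPS column, and it is, to rounding. A minimal sketch over a captured copy of this output (`perf.log` is a hypothetical file name):

```bash
# Sum the IOPS column (5th field from the end) of each per-subsystem row
# and compare it with the Total row of the spdk_nvme_perf summary.
awk '/NSID 1 from core 0:/ { sum += $(NF-4) }
     / Total /             { total = $(NF-4) }
     END { printf "rows=%.2f total=%.2f\n", sum, total }' perf.log
```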
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # true
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@166 -- # stoptarget
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:29.927 21:55:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n '' ']'
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:25:29.927
00:25:29.927 real 0m8.561s
00:25:29.927 user 0m32.101s
00:25:29.927 sys 0m1.286s
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:29.927 ************************************
00:25:29.927 END TEST nvmf_shutdown_tc4
00:25:29.927 ************************************
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@177 -- # trap - SIGINT SIGTERM EXIT
00:25:29.927
00:25:29.927 real 0m31.365s
00:25:29.927 user 1m36.080s
00:25:29.927 sys 0m9.927s
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:29.927 ************************************
00:25:29.927 END TEST nvmf_shutdown
00:25:29.927 ************************************
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:25:29.927
00:25:29.927 real 15m12.202s
00:25:29.927 user 47m1.830s
00:25:29.927 sys 3m8.007s
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:29.927 21:55:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:29.927 ************************************
00:25:29.927 END TEST nvmf_target_extra
00:25:29.927 ************************************
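nvmftestfini above unloads the kernel NVMe-oF modules inside a bounded retry loop (nvmf/common.sh@125-127). A minimal sketch of that pattern, not the exact common.sh implementation (the sleep between attempts is an assumption):

```bash
# Unload nvme-rdma, retrying up to 20 times in case module references
# linger while qpairs drain; modprobe -v echoes the rmmod commands it runs.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e
```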
21:55:02 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
00:25:29.927 21:55:02 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:29.927 21:55:02 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:29.927 21:55:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:29.927 ************************************
00:25:29.927 START TEST nvmf_host
00:25:29.927 ************************************
00:25:29.927 21:55:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
00:25:30.185 * Looking for test storage...
00:25:30.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:25:30.185 [lcov probe trace collapsed: common/autotest_common.sh@1680-1695 and scripts/common.sh's cmp_versions compare the installed lcov (1.15) against 2, then export LCOV_OPTS and LCOV with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo coverage flags]
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:25:30.186 [nvmf/common.sh@7-22 environment trace collapsed: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e from nvme gen-hostnqn, NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn; scripts/common.sh and paths/export.sh then assemble, export, and echo the toolchain PATH]
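The host identity in the collapsed trace comes from nvme-cli; a minimal sketch of the same derivation (whether common.sh strips the host ID exactly this way is an assumption):

```bash
# Generate a host NQN and derive the host ID from its trailing UUID,
# matching the NVME_HOSTNQN/NVME_HOSTID pair traced above.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep everything after the last ':'
echo "$NVME_HOSTNQN -> $NVME_HOSTID"
```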
00:25:30.186 [echoed PATH value collapsed: the same /opt/golangci, /opt/protoc and /opt/go segments repeated ahead of the system PATH]
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:30.186 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:25:30.186 21:55:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
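The `[: : integer expression expected` complaint above is a real scripting bug: nvmf/common.sh line 33 runs `'[' '' -eq 1 ']'`, feeding an unset flag to a numeric test. A hedged sketch of the usual repair (`FLAG` is a hypothetical stand-in for whichever variable common.sh actually tests):

```bash
# Repro and fix for "[: : integer expression expected".
FLAG=""                            # an unset or empty CI flag
# [ "$FLAG" -eq 1 ]                # fails: '' is not an integer
if [ "${FLAG:-0}" -eq 1 ]; then    # default the empty value to 0 first
    echo "flag enabled"
fi
```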
00:25:30.445 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:30.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.445 --rc genhtml_branch_coverage=1 00:25:30.445 --rc genhtml_function_coverage=1 00:25:30.445 --rc genhtml_legend=1 00:25:30.445 --rc geninfo_all_blocks=1 00:25:30.445 --rc geninfo_unexecuted_blocks=1 00:25:30.445 00:25:30.445 ' 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:30.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.445 --rc genhtml_branch_coverage=1 00:25:30.445 --rc genhtml_function_coverage=1 00:25:30.445 --rc genhtml_legend=1 00:25:30.445 --rc geninfo_all_blocks=1 00:25:30.445 --rc geninfo_unexecuted_blocks=1 00:25:30.445 00:25:30.445 ' 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:30.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.445 --rc genhtml_branch_coverage=1 00:25:30.445 --rc genhtml_function_coverage=1 00:25:30.445 --rc genhtml_legend=1 00:25:30.445 --rc geninfo_all_blocks=1 00:25:30.445 --rc geninfo_unexecuted_blocks=1 00:25:30.445 00:25:30.445 ' 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:30.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.445 --rc genhtml_branch_coverage=1 00:25:30.445 --rc genhtml_function_coverage=1 00:25:30.445 --rc genhtml_legend=1 00:25:30.445 --rc geninfo_all_blocks=1 00:25:30.445 --rc geninfo_unexecuted_blocks=1 00:25:30.445 00:25:30.445 ' 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.445 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... same expanded PATH as printed in full at paths/export.sh@2 above, re-prepended; repeated value elided ...]
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... elided, as above ...]
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... elided, as above ...]
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:30.446 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:30.446 21:55:02
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:25:30.446 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:25:30.446 00:25:30.446 real 0m0.229s 00:25:30.446 user 0m0.128s 00:25:30.446 sys 0m0.114s 00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.446 21:55:02 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:30.446 ************************************ 00:25:30.446 END TEST nvmf_multicontroller 00:25:30.446 ************************************ 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.704 ************************************ 00:25:30.704 START TEST nvmf_aer 00:25:30.704 ************************************ 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:25:30.704 * Looking for test storage... 
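nvmf_multicontroller above opts out rather than fail: on RDMA the host and target cannot share an IP, so the script prints the reason and exits 0, which the run_test wrapper still records as a pass (the END TEST banner) before launching the next test. The gate is just a transport check; a sketch, with the variable name assumed:

    if [[ $TEST_TRANSPORT == rdma ]]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0   # exit 0, not a failure: run_test treats the test as passed
    fi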
00:25:30.704 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.704 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:30.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.705 --rc genhtml_branch_coverage=1 00:25:30.705 --rc genhtml_function_coverage=1 00:25:30.705 --rc genhtml_legend=1 00:25:30.705 --rc geninfo_all_blocks=1 00:25:30.705 --rc geninfo_unexecuted_blocks=1 00:25:30.705 00:25:30.705 ' 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:30.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.705 --rc genhtml_branch_coverage=1 00:25:30.705 --rc genhtml_function_coverage=1 00:25:30.705 --rc genhtml_legend=1 00:25:30.705 --rc geninfo_all_blocks=1 00:25:30.705 --rc geninfo_unexecuted_blocks=1 00:25:30.705 00:25:30.705 ' 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:30.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.705 --rc genhtml_branch_coverage=1 00:25:30.705 --rc genhtml_function_coverage=1 00:25:30.705 --rc genhtml_legend=1 00:25:30.705 --rc geninfo_all_blocks=1 00:25:30.705 --rc geninfo_unexecuted_blocks=1 00:25:30.705 00:25:30.705 ' 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:30.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.705 --rc genhtml_branch_coverage=1 00:25:30.705 --rc genhtml_function_coverage=1 00:25:30.705 --rc genhtml_legend=1 00:25:30.705 --rc geninfo_all_blocks=1 00:25:30.705 --rc geninfo_unexecuted_blocks=1 00:25:30.705 00:25:30.705 ' 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421
00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:30.705 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:[... same expanded PATH as printed in full in the nvmf_multicontroller trace above; repeated value elided ...]
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... elided, as above ...]
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... elided, as above ...]
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... elided, as above ...]
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:30.964 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z rdma ']'
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer --
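build_nvmf_app_args in the trace assembles the target's command line as a bash array, and the repeated "line 33: [: : integer expression expected" message is a real (harmless) script bug: an empty variable fed to a numeric [ ... -eq 1 ] test. A sketch of both, with SOME_TEST_FLAG as a stand-in name for whatever variable common.sh line 33 actually reads:

    NVMF_APP=(/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # as traced at nvmf/common.sh@29

    # '[' '' -eq 1 ']' errors out; defaulting the variable keeps the test numeric:
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--hypothetical-extra-arg)      # illustrative only
    fi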
nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.964 21:55:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:37.658 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.658 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.658 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.658 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.658 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- 
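gather_supported_nvmf_pci_devs, traced above, buckets NICs by PCI vendor:device ID before deciding which ports the test may use. A sketch of the bucketing, assuming pci_bus_cache is an associative array mapping "vendor:device" to a space-separated list of PCI addresses (which is what the expansions in the trace imply):

    intel=0x8086 mellanox=0x15b3
    # Assumed shape of the cache, seeded here with the two ports found just below:
    declare -A pci_bus_cache=([0x15b3:0x1015]="0000:d9:00.0 0000:d9:00.1")
    e810=() x722=() mlx=()
    # Unquoted on purpose: each cache entry may hold several PCI addresses.
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # ConnectX-4 Lx
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
    # The mlx5 NIC selection seen in the trace ([[ mlx5 == mlx5 ]]) then
    # narrows the candidate list to the Mellanox devices only:
    pci_devs=("${mlx[@]}")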
nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:37.659 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:37.659 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:37.659 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.659 21:55:09 
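The backslash-heavy comparisons in the trace ([[ 0x1015 == \0\x\1\0\1\7 ]]) are not garbled output: xtrace prints a quoted right-hand side that way because inside [[ ]] an unquoted right operand of == is a glob pattern, and the script quotes the device IDs to force a literal match:

    dev_id=0x1015
    [[ $dev_id == "0x1017" ]] && echo "literal match only"
    [[ $dev_id == 0x101? ]]   && echo "unquoted: glob, matches any 0x101x id"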
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:37.659 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # rdma_device_init 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- 
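Since both ports are present, rdma_device_init loads the whole kernel RDMA stack before any connection is attempted; the module list is exactly what the trace shows. As a function:

    load_ib_rdma_modules() {
        [[ $(uname) == Linux ]] || return 0
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod"
        done
    }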
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:37.659 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:37.659 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:37.659 altname enp217s0f0np0 00:25:37.659 altname ens818f0np0 00:25:37.659 inet 192.168.100.8/24 scope global mlx_0_0 00:25:37.659 valid_lft forever preferred_lft forever 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:37.659 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:37.660 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:37.660 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:37.660 altname enp217s0f1np1 00:25:37.660 altname ens818f1np1 00:25:37.660 inet 192.168.100.9/24 scope global mlx_0_1 00:25:37.660 valid_lft forever preferred_lft forever 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # 
mapfile -t rxe_net_devs 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:37.660 192.168.100.9' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:37.660 192.168.100.9' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # head -n 1 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:37.660 192.168.100.9' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # head -n 1 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- 
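The address bookkeeping above reduces to one helper plus head/tail: each RDMA interface reports one IPv4 address, the list is joined with newlines, and the first and second lines become the two target IPs (192.168.100.8 and .9 here). A sketch using the exact pipeline from the trace:

    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 is addr/prefix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)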
nvmf/common.sh@482 -- # tail -n +2 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=3133990 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 3133990 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3133990 ']' 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.660 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:37.660 [2024-11-29 21:55:09.765257] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:37.660 [2024-11-29 21:55:09.765314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.660 [2024-11-29 21:55:09.835961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:37.660 [2024-11-29 21:55:09.877122] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.660 [2024-11-29 21:55:09.877164] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.660 [2024-11-29 21:55:09.877172] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.660 [2024-11-29 21:55:09.877180] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.660 [2024-11-29 21:55:09.877187] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
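Everything from here on happens over the target's RPC socket. nvmfappstart launches nvmf_tgt in the background and waitforlisten polls until the socket answers, which is why the DPDK/EAL and reactor notices interleave with the xtrace output; the aer script then provisions the subsystem with the five RPCs traced just below. A sketch of both steps (the probe RPC and retry budget in the wait loop are assumptions; rpc_get_methods is a standard SPDK RPC, and rpc_cmd is the harness wrapper around scripts/rpc.py):

    "${NVMF_APP[@]}" -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

    # Provisioning, exactly as traced below: transport, a 64 MB / 512 B-block
    # malloc bdev, a subsystem capped at two namespaces (-m 2, visible as
    # "max_namespaces": 2 in the JSON), the namespace, and the listener.
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420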
00:25:37.660 [2024-11-29 21:55:09.877282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.660 [2024-11-29 21:55:09.877395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.660 [2024-11-29 21:55:09.877481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.660 [2024-11-29 21:55:09.877483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.919 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:37.919 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:25:37.919 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:37.919 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.919 21:55:09 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:37.919 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.919 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:37.919 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.919 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:37.919 [2024-11-29 21:55:10.065499] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15e8f50/0x15ed400) succeed. 00:25:37.919 [2024-11-29 21:55:10.076016] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15ea540/0x162eaa0) succeed. 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.178 Malloc0 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.178 [2024-11-29 
21:55:10.242711] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.178 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.178 [ 00:25:38.178 { 00:25:38.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:38.178 "subtype": "Discovery", 00:25:38.178 "listen_addresses": [], 00:25:38.178 "allow_any_host": true, 00:25:38.178 "hosts": [] 00:25:38.178 }, 00:25:38.178 { 00:25:38.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.178 "subtype": "NVMe", 00:25:38.178 "listen_addresses": [ 00:25:38.178 { 00:25:38.178 "trtype": "RDMA", 00:25:38.178 "adrfam": "IPv4", 00:25:38.178 "traddr": "192.168.100.8", 00:25:38.178 "trsvcid": "4420" 00:25:38.178 } 00:25:38.178 ], 00:25:38.178 "allow_any_host": true, 00:25:38.178 "hosts": [], 00:25:38.178 "serial_number": "SPDK00000000000001", 00:25:38.178 "model_number": "SPDK bdev Controller", 00:25:38.178 "max_namespaces": 2, 00:25:38.178 "min_cntlid": 1, 00:25:38.178 "max_cntlid": 65519, 00:25:38.178 "namespaces": [ 00:25:38.178 { 00:25:38.178 "nsid": 1, 00:25:38.178 "bdev_name": "Malloc0", 00:25:38.178 "name": "Malloc0", 00:25:38.178 "nguid": "9F308F628A654D4C88AE38C2CFD28AA0", 00:25:38.178 "uuid": "9f308f62-8a65-4d4c-88ae-38c2cfd28aa0" 00:25:38.178 } 00:25:38.178 ] 00:25:38.178 } 00:25:38.178 ] 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3134075 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:25:38.179 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.438 Malloc1 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.438 [ 00:25:38.438 { 00:25:38.438 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:38.438 "subtype": "Discovery", 00:25:38.438 "listen_addresses": [], 00:25:38.438 "allow_any_host": true, 00:25:38.438 "hosts": [] 00:25:38.438 }, 00:25:38.438 { 00:25:38.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.438 "subtype": "NVMe", 00:25:38.438 "listen_addresses": [ 00:25:38.438 { 00:25:38.438 "trtype": "RDMA", 00:25:38.438 "adrfam": "IPv4", 00:25:38.438 "traddr": "192.168.100.8", 00:25:38.438 "trsvcid": "4420" 00:25:38.438 } 00:25:38.438 ], 00:25:38.438 "allow_any_host": true, 00:25:38.438 "hosts": [], 00:25:38.438 "serial_number": "SPDK00000000000001", 00:25:38.438 "model_number": "SPDK bdev Controller", 00:25:38.438 "max_namespaces": 2, 00:25:38.438 "min_cntlid": 1, 00:25:38.438 "max_cntlid": 65519, 00:25:38.438 "namespaces": [ 00:25:38.438 { 00:25:38.438 "nsid": 1, 00:25:38.438 "bdev_name": "Malloc0", 00:25:38.438 "name": "Malloc0", 00:25:38.438 "nguid": "9F308F628A654D4C88AE38C2CFD28AA0", 00:25:38.438 "uuid": "9f308f62-8a65-4d4c-88ae-38c2cfd28aa0" 00:25:38.438 }, 00:25:38.438 { 00:25:38.438 "nsid": 2, 00:25:38.438 "bdev_name": "Malloc1", 00:25:38.438 "name": "Malloc1", 00:25:38.438 "nguid": "C34DE97AF1094633B859F3CA1838C3F4", 00:25:38.438 "uuid": "c34de97a-f109-4633-b859-f3ca1838c3f4" 00:25:38.438 } 00:25:38.438 ] 00:25:38.438 } 00:25:38.438 ] 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3134075 00:25:38.438 Asynchronous Event Request test 00:25:38.438 Attaching to 192.168.100.8 00:25:38.438 Attached to 192.168.100.8 00:25:38.438 Registering asynchronous event callbacks... 00:25:38.438 Starting namespace attribute notice tests for all controllers... 00:25:38.438 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:38.438 aer_cb - Changed Namespace 00:25:38.438 Cleaning up... 
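This block is the heart of the aer test. The tool is started with -t /tmp/aer_touch_file and, as the option suggests, touches that file once it is attached and waiting, while the script polls for it in 0.1 s steps (up to 200 tries, about 20 s), as the i=1, i=2 iterations in the trace show. Hot-adding the second namespace then makes the target raise an Asynchronous Event Notification (event type 0x02 = Notice, log page 4 = Changed Namespace List, matching the aer_cb line above), and the tool exits once it sees it. A sketch of the poll plus the trigger RPCs copied from the trace:

    waitforfile() {
        local file=$1 i=0
        while [ ! -e "$file" ] && [ $i -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$file" ]   # fails if we timed out
    }
    waitforfile /tmp/aer_touch_file || exit 1

    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"   # the aer tool exits 0 after the namespace-change AEN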
00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:38.438 rmmod nvme_rdma 00:25:38.438 rmmod nvme_fabrics 00:25:38.438 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 3133990 ']' 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 3133990 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3133990 ']' 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3133990 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3133990 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3133990' 00:25:38.697 killing process 
with pid 3133990 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3133990 00:25:38.697 21:55:10 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3133990 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:38.956 00:25:38.956 real 0m8.270s 00:25:38.956 user 0m6.245s 00:25:38.956 sys 0m5.741s 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:38.956 ************************************ 00:25:38.956 END TEST nvmf_aer 00:25:38.956 ************************************ 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.956 ************************************ 00:25:38.956 START TEST nvmf_async_init 00:25:38.956 ************************************ 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:25:38.956 * Looking for test storage... 00:25:38.956 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:25:38.956 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:39.216 
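With the AER verified, teardown deletes the bdevs and the subsystem, unloads the kernel modules, and kills the target. Two patterns from the trace are worth spelling out. Module removal can transiently fail while queues drain, so errexit is dropped around a retry loop (the per-iteration success check here is an assumption; the trace only shows the pass that succeeded):

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e

And killprocess guards the kill with three checks before reaping: the pid is non-empty, the process still exists, and its comm name (reactor_0 here) is not sudo. A sketch:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0              # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1         # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }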
21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.216 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:39.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.217 --rc genhtml_branch_coverage=1 00:25:39.217 --rc genhtml_function_coverage=1 00:25:39.217 --rc genhtml_legend=1 00:25:39.217 --rc geninfo_all_blocks=1 00:25:39.217 --rc geninfo_unexecuted_blocks=1 00:25:39.217 00:25:39.217 ' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:39.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.217 --rc genhtml_branch_coverage=1 00:25:39.217 --rc genhtml_function_coverage=1 00:25:39.217 --rc genhtml_legend=1 00:25:39.217 --rc geninfo_all_blocks=1 00:25:39.217 --rc geninfo_unexecuted_blocks=1 00:25:39.217 00:25:39.217 ' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:39.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.217 --rc genhtml_branch_coverage=1 00:25:39.217 --rc genhtml_function_coverage=1 00:25:39.217 --rc genhtml_legend=1 00:25:39.217 --rc geninfo_all_blocks=1 00:25:39.217 --rc geninfo_unexecuted_blocks=1 00:25:39.217 00:25:39.217 ' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:39.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.217 --rc genhtml_branch_coverage=1 00:25:39.217 --rc genhtml_function_coverage=1 00:25:39.217 --rc genhtml_legend=1 00:25:39.217 --rc geninfo_all_blocks=1 00:25:39.217 --rc geninfo_unexecuted_blocks=1 00:25:39.217 00:25:39.217 ' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
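The lcov gate traced above is scripts/common.sh comparing the installed lcov version (1.15 on this runner) against 2: cmp_versions splits each version string on '.', '-' and ':' and compares the pieces numerically, so 1 < 2 decides on the first component and the branch/function coverage flags get exported. A minimal standalone sketch of that comparison logic (function name and structure are illustrative, not the script's exact code):

    # version_lt A B  ->  succeeds (exit 0) iff A sorts strictly before B
    version_lt() {
        local IFS=.-:                    # same separators cmp_versions splits on
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first differing component decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                         # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov older than 2: enable the --rc coverage flags'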
00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.217 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
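The stray "[: : integer expression expected" above is a real, if harmless, bug signature: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' because the flag it checks expands to an empty string, and test(1) refuses to compare a non-integer; build_nvmf_app_args simply falls through to the next branch. The usual hardening is to default the expansion before the numeric test (FLAG below is a placeholder; the actual variable name at line 33 is not visible in this trace):

    [ '' -eq 1 ]             # reproduces the error: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ]   # guarded form: an unset or empty FLAG compares as 0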
00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=eade936aba664c3a9ee985ad8cdf2231 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.217 21:55:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:47.335 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:47.335 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:47.335 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:47.336 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:47.336 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # rdma_device_init 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:47.336 
21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:47.336 21:55:18 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:47.336 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:47.336 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:47.336 altname enp217s0f0np0 00:25:47.336 altname ens818f0np0 00:25:47.336 inet 192.168.100.8/24 scope global mlx_0_0 00:25:47.336 valid_lft forever preferred_lft forever 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:47.336 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:47.336 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:47.336 altname enp217s0f1np1 00:25:47.336 altname ens818f1np1 00:25:47.336 inet 192.168.100.9/24 scope global mlx_0_1 00:25:47.336 valid_lft forever preferred_lft forever 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:47.336 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:47.337 192.168.100.9' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:47.337 192.168.100.9' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # head -n 1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:47.337 192.168.100.9' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # tail -n +2 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # head -n 1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- 
host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=3137511 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 3137511 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3137511 ']' 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 [2024-11-29 21:55:18.443875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:47.337 [2024-11-29 21:55:18.443937] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.337 [2024-11-29 21:55:18.514942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.337 [2024-11-29 21:55:18.553758] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.337 [2024-11-29 21:55:18.553799] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.337 [2024-11-29 21:55:18.553809] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.337 [2024-11-29 21:55:18.553817] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.337 [2024-11-29 21:55:18.553824] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
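At this point the target is up: the trace above found two mlx5 ports (mlx_0_0 at 192.168.100.8, mlx_0_1 at 192.168.100.9), loaded the RDMA kernel stack, and launched nvmf_tgt (pid 3137511) on core 0, listening on /var/tmp/spdk.sock. The rpc_cmd calls that follow configure it over that socket; condensed into direct rpc.py invocations, with every address, name and size taken from this trace (a sketch of the sequence, not the test script itself):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024       # the transport nvmftestinit selected
    $rpc bdev_null_create null0 1024 512     # 1024 MiB backing bdev, 512 B blocks -> 2097152 blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a           # -a: allow any host (for now)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g eade936aba664c3a9ee985ad8cdf2231
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

The resulting nvme0n1 bdev is then dumped with bdev_get_bdevs and bounced once with bdev_nvme_reset_controller to confirm the controller reconnects: cntlid 1 in the dump before the reset, cntlid 2 after.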
00:25:47.337 [2024-11-29 21:55:18.553846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 [2024-11-29 21:55:18.714330] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x117df70/0x1182420) succeed. 00:25:47.337 [2024-11-29 21:55:18.723307] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x117f420/0x11c3ac0) succeed. 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 null0 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g eade936aba664c3a9ee985ad8cdf2231 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 [2024-11-29 21:55:18.805377] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 nvme0n1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.337 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 [ 00:25:47.337 { 00:25:47.337 "name": "nvme0n1", 00:25:47.337 "aliases": [ 00:25:47.337 "eade936a-ba66-4c3a-9ee9-85ad8cdf2231" 00:25:47.337 ], 00:25:47.337 "product_name": "NVMe disk", 00:25:47.337 "block_size": 512, 00:25:47.337 "num_blocks": 2097152, 00:25:47.337 "uuid": "eade936a-ba66-4c3a-9ee9-85ad8cdf2231", 00:25:47.337 "numa_id": 1, 00:25:47.337 "assigned_rate_limits": { 00:25:47.337 "rw_ios_per_sec": 0, 00:25:47.337 "rw_mbytes_per_sec": 0, 00:25:47.337 "r_mbytes_per_sec": 0, 00:25:47.337 "w_mbytes_per_sec": 0 00:25:47.337 }, 00:25:47.337 "claimed": false, 00:25:47.337 "zoned": false, 00:25:47.337 "supported_io_types": { 00:25:47.337 "read": true, 00:25:47.337 "write": true, 00:25:47.337 "unmap": false, 00:25:47.337 "flush": true, 00:25:47.337 "reset": true, 00:25:47.337 "nvme_admin": true, 00:25:47.337 "nvme_io": true, 00:25:47.337 "nvme_io_md": false, 00:25:47.337 "write_zeroes": true, 00:25:47.337 "zcopy": false, 00:25:47.337 "get_zone_info": false, 00:25:47.337 "zone_management": false, 00:25:47.337 "zone_append": false, 00:25:47.337 "compare": true, 00:25:47.337 "compare_and_write": true, 00:25:47.337 "abort": true, 00:25:47.337 "seek_hole": false, 00:25:47.337 "seek_data": false, 00:25:47.337 "copy": true, 00:25:47.337 "nvme_iov_md": false 00:25:47.337 }, 00:25:47.337 "memory_domains": [ 00:25:47.337 { 00:25:47.337 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:47.337 "dma_device_type": 0 00:25:47.337 } 00:25:47.337 ], 00:25:47.337 "driver_specific": { 00:25:47.337 "nvme": [ 00:25:47.337 { 00:25:47.337 "trid": { 00:25:47.337 "trtype": "RDMA", 00:25:47.337 "adrfam": "IPv4", 00:25:47.337 "traddr": "192.168.100.8", 00:25:47.337 "trsvcid": "4420", 00:25:47.337 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:47.337 }, 00:25:47.337 "ctrlr_data": { 00:25:47.337 "cntlid": 1, 00:25:47.337 "vendor_id": "0x8086", 00:25:47.337 "model_number": "SPDK bdev Controller", 00:25:47.337 "serial_number": "00000000000000000000", 00:25:47.337 "firmware_revision": "24.09.1", 00:25:47.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.337 "oacs": { 00:25:47.337 "security": 0, 
00:25:47.337 "format": 0, 00:25:47.337 "firmware": 0, 00:25:47.337 "ns_manage": 0 00:25:47.337 }, 00:25:47.337 "multi_ctrlr": true, 00:25:47.337 "ana_reporting": false 00:25:47.337 }, 00:25:47.337 "vs": { 00:25:47.338 "nvme_version": "1.3" 00:25:47.338 }, 00:25:47.338 "ns_data": { 00:25:47.338 "id": 1, 00:25:47.338 "can_share": true 00:25:47.338 } 00:25:47.338 } 00:25:47.338 ], 00:25:47.338 "mp_policy": "active_passive" 00:25:47.338 } 00:25:47.338 } 00:25:47.338 ] 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 [2024-11-29 21:55:18.911654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:47.338 [2024-11-29 21:55:18.929057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:47.338 [2024-11-29 21:55:18.950937] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 [ 00:25:47.338 { 00:25:47.338 "name": "nvme0n1", 00:25:47.338 "aliases": [ 00:25:47.338 "eade936a-ba66-4c3a-9ee9-85ad8cdf2231" 00:25:47.338 ], 00:25:47.338 "product_name": "NVMe disk", 00:25:47.338 "block_size": 512, 00:25:47.338 "num_blocks": 2097152, 00:25:47.338 "uuid": "eade936a-ba66-4c3a-9ee9-85ad8cdf2231", 00:25:47.338 "numa_id": 1, 00:25:47.338 "assigned_rate_limits": { 00:25:47.338 "rw_ios_per_sec": 0, 00:25:47.338 "rw_mbytes_per_sec": 0, 00:25:47.338 "r_mbytes_per_sec": 0, 00:25:47.338 "w_mbytes_per_sec": 0 00:25:47.338 }, 00:25:47.338 "claimed": false, 00:25:47.338 "zoned": false, 00:25:47.338 "supported_io_types": { 00:25:47.338 "read": true, 00:25:47.338 "write": true, 00:25:47.338 "unmap": false, 00:25:47.338 "flush": true, 00:25:47.338 "reset": true, 00:25:47.338 "nvme_admin": true, 00:25:47.338 "nvme_io": true, 00:25:47.338 "nvme_io_md": false, 00:25:47.338 "write_zeroes": true, 00:25:47.338 "zcopy": false, 00:25:47.338 "get_zone_info": false, 00:25:47.338 "zone_management": false, 00:25:47.338 "zone_append": false, 00:25:47.338 "compare": true, 00:25:47.338 "compare_and_write": true, 00:25:47.338 "abort": true, 00:25:47.338 "seek_hole": false, 00:25:47.338 "seek_data": false, 00:25:47.338 "copy": true, 00:25:47.338 "nvme_iov_md": false 00:25:47.338 }, 00:25:47.338 "memory_domains": [ 00:25:47.338 { 00:25:47.338 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:47.338 "dma_device_type": 0 00:25:47.338 } 00:25:47.338 ], 00:25:47.338 "driver_specific": { 00:25:47.338 "nvme": [ 00:25:47.338 { 00:25:47.338 "trid": { 00:25:47.338 "trtype": "RDMA", 00:25:47.338 "adrfam": "IPv4", 00:25:47.338 "traddr": "192.168.100.8", 00:25:47.338 "trsvcid": "4420", 00:25:47.338 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:25:47.338 }, 00:25:47.338 "ctrlr_data": { 00:25:47.338 "cntlid": 2, 00:25:47.338 "vendor_id": "0x8086", 00:25:47.338 "model_number": "SPDK bdev Controller", 00:25:47.338 "serial_number": "00000000000000000000", 00:25:47.338 "firmware_revision": "24.09.1", 00:25:47.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.338 "oacs": { 00:25:47.338 "security": 0, 00:25:47.338 "format": 0, 00:25:47.338 "firmware": 0, 00:25:47.338 "ns_manage": 0 00:25:47.338 }, 00:25:47.338 "multi_ctrlr": true, 00:25:47.338 "ana_reporting": false 00:25:47.338 }, 00:25:47.338 "vs": { 00:25:47.338 "nvme_version": "1.3" 00:25:47.338 }, 00:25:47.338 "ns_data": { 00:25:47.338 "id": 1, 00:25:47.338 "can_share": true 00:25:47.338 } 00:25:47.338 } 00:25:47.338 ], 00:25:47.338 "mp_policy": "active_passive" 00:25:47.338 } 00:25:47.338 } 00:25:47.338 ] 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:18 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7gL6TQYqS1 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7gL6TQYqS1 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.7gL6TQYqS1 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 [2024-11-29 21:55:19.037672] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:19 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 [2024-11-29 21:55:19.053718] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.338 nvme0n1 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.338 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 [ 00:25:47.338 { 00:25:47.338 "name": "nvme0n1", 00:25:47.338 "aliases": [ 00:25:47.338 "eade936a-ba66-4c3a-9ee9-85ad8cdf2231" 00:25:47.338 ], 00:25:47.338 "product_name": "NVMe disk", 00:25:47.338 "block_size": 512, 00:25:47.338 "num_blocks": 2097152, 00:25:47.338 "uuid": "eade936a-ba66-4c3a-9ee9-85ad8cdf2231", 00:25:47.338 "numa_id": 1, 00:25:47.338 "assigned_rate_limits": { 00:25:47.338 "rw_ios_per_sec": 0, 00:25:47.338 "rw_mbytes_per_sec": 0, 00:25:47.338 "r_mbytes_per_sec": 0, 00:25:47.338 "w_mbytes_per_sec": 0 00:25:47.338 }, 00:25:47.338 "claimed": false, 00:25:47.338 "zoned": false, 00:25:47.338 "supported_io_types": { 00:25:47.338 "read": true, 00:25:47.338 "write": true, 00:25:47.338 "unmap": false, 00:25:47.338 "flush": true, 00:25:47.338 "reset": true, 00:25:47.338 "nvme_admin": true, 00:25:47.338 "nvme_io": true, 00:25:47.338 "nvme_io_md": false, 00:25:47.338 "write_zeroes": true, 00:25:47.338 "zcopy": false, 00:25:47.338 "get_zone_info": false, 00:25:47.338 "zone_management": false, 00:25:47.338 "zone_append": false, 00:25:47.338 "compare": true, 00:25:47.338 "compare_and_write": true, 00:25:47.338 "abort": true, 00:25:47.338 "seek_hole": false, 00:25:47.338 "seek_data": false, 00:25:47.338 "copy": true, 00:25:47.338 "nvme_iov_md": false 00:25:47.338 }, 00:25:47.338 "memory_domains": [ 00:25:47.338 { 00:25:47.338 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:25:47.338 "dma_device_type": 0 00:25:47.338 } 00:25:47.338 ], 00:25:47.338 "driver_specific": { 00:25:47.338 "nvme": [ 00:25:47.338 { 00:25:47.338 "trid": { 00:25:47.338 "trtype": "RDMA", 00:25:47.338 "adrfam": "IPv4", 00:25:47.338 "traddr": "192.168.100.8", 00:25:47.338 "trsvcid": "4421", 00:25:47.338 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:47.338 }, 00:25:47.338 "ctrlr_data": { 00:25:47.338 "cntlid": 3, 00:25:47.338 "vendor_id": "0x8086", 00:25:47.338 "model_number": "SPDK bdev Controller", 00:25:47.338 "serial_number": "00000000000000000000", 00:25:47.338 "firmware_revision": 
"24.09.1", 00:25:47.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.338 "oacs": { 00:25:47.338 "security": 0, 00:25:47.338 "format": 0, 00:25:47.338 "firmware": 0, 00:25:47.338 "ns_manage": 0 00:25:47.339 }, 00:25:47.339 "multi_ctrlr": true, 00:25:47.339 "ana_reporting": false 00:25:47.339 }, 00:25:47.339 "vs": { 00:25:47.339 "nvme_version": "1.3" 00:25:47.339 }, 00:25:47.339 "ns_data": { 00:25:47.339 "id": 1, 00:25:47.339 "can_share": true 00:25:47.339 } 00:25:47.339 } 00:25:47.339 ], 00:25:47.339 "mp_policy": "active_passive" 00:25:47.339 } 00:25:47.339 } 00:25:47.339 ] 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.7gL6TQYqS1 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:47.339 rmmod nvme_rdma 00:25:47.339 rmmod nvme_fabrics 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 3137511 ']' 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 3137511 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3137511 ']' 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3137511 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3137511 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3137511' 00:25:47.339 killing process with pid 3137511 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3137511 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3137511 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:25:47.339 00:25:47.339 real 0m8.406s 00:25:47.339 user 0m3.142s 00:25:47.339 sys 0m5.874s 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:47.339 ************************************ 00:25:47.339 END TEST nvmf_async_init 00:25:47.339 ************************************ 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.339 ************************************ 00:25:47.339 START TEST dma 00:25:47.339 ************************************ 00:25:47.339 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:25:47.599 * Looking for test storage... 
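The timing block above closes nvmf_async_init (about 8.4 s wall time): attach over port 4420, one controller reset, detach, then the same attach repeated through a PSK-secured listener on 4421; the cntlid climbing 1, 2, 3 across the three bdev_get_bdevs dumps is the tell. For reference before the dma trace resumes, the TLS leg condensed from the log ($rpc as in the earlier sketch; key material exactly as this run generated it, and the target itself logs that TLS support is considered experimental):

    key=$(mktemp)                                    # this run got /tmp/tmp.7gL6TQYqS1
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
    chmod 0600 "$key"
    $rpc keyring_file_add_key key0 "$key"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable   # hosts must now be listed explicitly
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0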
00:25:47.599 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:47.599 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.600 --rc genhtml_branch_coverage=1 00:25:47.600 --rc genhtml_function_coverage=1 00:25:47.600 --rc genhtml_legend=1 00:25:47.600 --rc geninfo_all_blocks=1 00:25:47.600 --rc geninfo_unexecuted_blocks=1 00:25:47.600 00:25:47.600 ' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.600 --rc genhtml_branch_coverage=1 00:25:47.600 --rc genhtml_function_coverage=1 00:25:47.600 --rc genhtml_legend=1 00:25:47.600 --rc geninfo_all_blocks=1 00:25:47.600 --rc geninfo_unexecuted_blocks=1 00:25:47.600 00:25:47.600 ' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.600 --rc genhtml_branch_coverage=1 00:25:47.600 --rc genhtml_function_coverage=1 00:25:47.600 --rc genhtml_legend=1 00:25:47.600 --rc geninfo_all_blocks=1 00:25:47.600 --rc geninfo_unexecuted_blocks=1 00:25:47.600 00:25:47.600 ' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.600 --rc genhtml_branch_coverage=1 00:25:47.600 --rc genhtml_function_coverage=1 00:25:47.600 --rc genhtml_legend=1 00:25:47.600 --rc geninfo_all_blocks=1 00:25:47.600 --rc geninfo_unexecuted_blocks=1 00:25:47.600 00:25:47.600 ' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.600 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:47.600 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:47.601 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.601 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
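[Editor's note] The "[: : integer expression expected" message recorded a few lines up comes from the traced test '[' '' -eq 1 ']': an unset harness flag expands to the empty string, which the numeric -eq operator cannot parse, so the test errors out (exit status 2) instead of evaluating false. A minimal sketch of the failure and the usual defensive default follows; FLAG is a hypothetical stand-in for whichever variable was unset in common.sh.

    #!/usr/bin/env bash
    # Reproduce the warning seen in the trace above: an empty string
    # reaching an arithmetic test in [ ... ].
    FLAG=""
    [ "$FLAG" -eq 1 ] && echo on   # prints "[: : integer expression expected", exits 2
    # Defensive form: default the expansion so the operand is always numeric.
    [ "${FLAG:-0}" -eq 1 ] && echo on || echo off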
00:25:47.601 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.601 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:47.601 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:47.601 21:55:19 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.601 21:55:19 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # 
[[ mlx5 == mlx5 ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:54.165 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:54.165 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:54.165 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: 
mlx_0_1' 00:25:54.165 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.165 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # is_hw=yes 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # rdma_device_init 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@526 -- # allocate_nic_ips 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:54.166 21:55:26 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:54.166 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:54.166 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:54.166 altname enp217s0f0np0 00:25:54.166 altname ens818f0np0 00:25:54.166 inet 192.168.100.8/24 scope global mlx_0_0 00:25:54.166 valid_lft forever preferred_lft forever 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:54.166 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:54.166 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:54.166 altname enp217s0f1np1 00:25:54.166 altname ens818f1np1 00:25:54.166 inet 192.168.100.9/24 scope global mlx_0_1 00:25:54.166 valid_lft forever preferred_lft forever 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # return 0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # 
for net_dev in "${net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:54.166 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:25:54.167 192.168.100.9' 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # head -n 1 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:25:54.167 192.168.100.9' 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:25:54.167 192.168.100.9' 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # tail -n +2 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # head -n 1 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:25:54.167 21:55:26 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@505 -- # nvmfpid=3140944 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@506 -- # waitforlisten 3140944 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 3140944 ']' 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:54.167 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.425 [2024-11-29 21:55:26.439574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:54.425 [2024-11-29 21:55:26.439625] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.425 [2024-11-29 21:55:26.509002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:54.425 [2024-11-29 21:55:26.547848] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.425 [2024-11-29 21:55:26.547890] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.425 [2024-11-29 21:55:26.547900] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.425 [2024-11-29 21:55:26.547909] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.425 [2024-11-29 21:55:26.547915] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
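[Editor's note] nvmfappstart above launches nvmf_tgt in the background and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-poll pattern, assuming an SPDK checkout as the working directory (this is not the harness's exact waitforlisten implementation; spdk_get_version is a standard SPDK RPC used here as a liveness probe):

    # Launch the target, then poll its RPC socket until it responds;
    # bail out early if the process dies before the socket comes up.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"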
00:25:54.425 [2024-11-29 21:55:26.547961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.425 [2024-11-29 21:55:26.547963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.425 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:54.425 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:25:54.425 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:54.425 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:54.425 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.683 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.683 21:55:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:54.683 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.684 [2024-11-29 21:55:26.706165] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5eb910/0x5efdc0) succeed. 00:25:54.684 [2024-11-29 21:55:26.715174] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5ecdc0/0x631460) succeed. 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.684 Malloc0 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:54.684 [2024-11-29 21:55:26.876571] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # config=() 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # local subsystem config 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:54.684 { 00:25:54.684 "params": { 00:25:54.684 "name": "Nvme$subsystem", 00:25:54.684 "trtype": "$TEST_TRANSPORT", 00:25:54.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.684 "adrfam": "ipv4", 00:25:54.684 "trsvcid": "$NVMF_PORT", 00:25:54.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.684 "hdgst": ${hdgst:-false}, 00:25:54.684 "ddgst": ${ddgst:-false} 00:25:54.684 }, 00:25:54.684 "method": "bdev_nvme_attach_controller" 00:25:54.684 } 00:25:54.684 EOF 00:25:54.684 )") 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@578 -- # cat 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # jq . 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@581 -- # IFS=, 00:25:54.684 21:55:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:54.684 "params": { 00:25:54.684 "name": "Nvme0", 00:25:54.684 "trtype": "rdma", 00:25:54.684 "traddr": "192.168.100.8", 00:25:54.684 "adrfam": "ipv4", 00:25:54.684 "trsvcid": "4420", 00:25:54.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:54.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:54.684 "hdgst": false, 00:25:54.684 "ddgst": false 00:25:54.684 }, 00:25:54.684 "method": "bdev_nvme_attach_controller" 00:25:54.684 }' 00:25:54.684 [2024-11-29 21:55:26.928328] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
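[Editor's note] The JSON printed above is what gen_nvmf_target_json hands to test_dma on file descriptor 62 (--json /dev/fd/62): a here-doc per subsystem id, shell-expanded and validated through jq. A simplified sketch of the same pattern, assuming the harness variables TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT are set (rdma / 192.168.100.8 / 4420 in this run):

    # Emit one bdev_nvme_attach_controller entry for subsystem 0 and
    # pretty-print/validate it with jq, as the trace above does.
    subsystem=0
    jq . <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF

Feeding the result to test_dma on a file descriptor lets the app attach the controller without writing a config file to disk.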
00:25:54.684 [2024-11-29 21:55:26.928375] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141175 ] 00:25:54.943 [2024-11-29 21:55:26.995445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:54.943 [2024-11-29 21:55:27.034839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.943 [2024-11-29 21:55:27.034842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.208 bdev Nvme0n1 reports 1 memory domains 00:26:00.208 bdev Nvme0n1 supports RDMA memory domain 00:26:00.208 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:00.208 ========================================================================== 00:26:00.208 Latency [us] 00:26:00.208 IOPS MiB/s Average min max 00:26:00.208 Core 2: 21513.29 84.04 743.09 244.97 8401.80 00:26:00.208 Core 3: 21674.08 84.66 737.54 237.01 8291.67 00:26:00.208 ========================================================================== 00:26:00.208 Total : 43187.37 168.70 740.31 237.01 8401.80 00:26:00.208 00:26:00.208 Total operations: 215952, translate 215952 pull_push 0 memzero 0 00:26:00.208 21:55:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:26:00.208 21:55:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:26:00.208 21:55:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:26:00.466 [2024-11-29 21:55:32.472023] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
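[Editor's note] In the run above, Nvme0n1 exposes an RDMA memory domain, so all 215,952 operations completed with an address translation and no data copy (the translate counter matches the total 1:1). The pull_push run whose banner starts here deliberately swaps in a plain malloc bdev to force the copy path. The exact shape of the harness's gen_malloc_json is not shown in the trace, so the config below is an assumption; the sizes match MALLOC_BDEV_SIZE=256 and MALLOC_BLOCK_SIZE=512 set earlier in dma.sh:

    # Assumed-shape SPDK JSON config creating the 256 MiB, 512 B-block
    # malloc bdev used by the pull_push run (256 MiB / 512 B = 524288 blocks).
    jq . <<EOF
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 524288, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF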
00:26:00.466 [2024-11-29 21:55:32.472083] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141985 ] 00:26:00.466 [2024-11-29 21:55:32.538024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:00.466 [2024-11-29 21:55:32.574995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.466 [2024-11-29 21:55:32.574997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.729 bdev Malloc0 reports 2 memory domains 00:26:05.729 bdev Malloc0 doesn't support RDMA memory domain 00:26:05.729 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:05.729 ========================================================================== 00:26:05.730 Latency [us] 00:26:05.730 IOPS MiB/s Average min max 00:26:05.730 Core 2: 14410.30 56.29 1109.61 464.82 1405.90 00:26:05.730 Core 3: 14667.62 57.30 1090.13 452.53 2013.98 00:26:05.730 ========================================================================== 00:26:05.730 Total : 29077.92 113.59 1099.78 452.53 2013.98 00:26:05.730 00:26:05.730 Total operations: 145439, translate 0 pull_push 581756 memzero 0 00:26:05.730 21:55:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:26:05.730 21:55:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:26:05.730 21:55:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:05.730 21:55:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:26:05.730 Ignoring -M option 00:26:05.730 [2024-11-29 21:55:37.921133] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
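[Editor's note] As expected, the malloc run above reports "doesn't support RDMA memory domain" and finishes with 581,756 pull/push transfers for 145,439 operations, i.e. exactly four buffer transfers per IO in this configuration, at roughly a third lower aggregate IOPS than the translate run. The memzero run that follows targets an lvol (lvs0/lvol0) stacked on the NVMe-oF controller. For illustration only, an equivalent stack can be hand-built against a running SPDK app with rpc.py; names match the log, the 64 MiB size is an arbitrary choice, and the harness actually builds this through its JSON generator rather than live RPCs:

    # Hypothetical manual recreation of the lvs0/lvol0 stack:
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0
    scripts/rpc.py bdev_lvol_create -l lvs0 lvol0 64   # size in MiB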
00:26:05.730 [2024-11-29 21:55:37.921192] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3142934 ] 00:26:05.988 [2024-11-29 21:55:37.990090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:05.988 [2024-11-29 21:55:38.027418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.988 [2024-11-29 21:55:38.027420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.251 bdev 0f04c669-4b70-41b4-a06f-68019c89f329 reports 1 memory domains 00:26:11.251 bdev 0f04c669-4b70-41b4-a06f-68019c89f329 supports RDMA memory domain 00:26:11.251 Initialization complete, running randread IO for 5 sec on 2 cores 00:26:11.251 ========================================================================== 00:26:11.251 Latency [us] 00:26:11.251 IOPS MiB/s Average min max 00:26:11.251 Core 2: 68198.91 266.40 233.67 88.37 1741.18 00:26:11.251 Core 3: 68575.67 267.87 232.36 84.95 1802.51 00:26:11.251 ========================================================================== 00:26:11.251 Total : 136774.58 534.28 233.01 84.95 1802.51 00:26:11.251 00:26:11.251 Total operations: 683948, translate 0 pull_push 0 memzero 683948 00:26:11.251 21:55:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:26:11.509 [2024-11-29 21:55:43.585249] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:14.037 Initializing NVMe Controllers 00:26:14.037 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:26:14.037 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:14.037 Initialization complete. Launching workers. 00:26:14.037 ======================================================== 00:26:14.037 Latency(us) 00:26:14.037 Device Information : IOPS MiB/s Average min max 00:26:14.037 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2012.75 7.86 7980.04 7804.60 8157.11 00:26:14.037 ======================================================== 00:26:14.037 Total : 2012.75 7.86 7980.04 7804.60 8157.11 00:26:14.037 00:26:14.037 21:55:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:26:14.037 21:55:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:26:14.037 21:55:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:26:14.037 21:55:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:26:14.037 [2024-11-29 21:55:45.922378] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:26:14.037 [2024-11-29 21:55:45.922426] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144246 ] 00:26:14.037 [2024-11-29 21:55:45.990286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:14.037 [2024-11-29 21:55:46.029922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.037 [2024-11-29 21:55:46.029924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.300 bdev 778cfdbb-6781-4a32-8f35-3bcbb4f45036 reports 1 memory domains 00:26:19.300 bdev 778cfdbb-6781-4a32-8f35-3bcbb4f45036 supports RDMA memory domain 00:26:19.300 Initialization complete, running randrw IO for 5 sec on 2 cores 00:26:19.300 ========================================================================== 00:26:19.300 Latency [us] 00:26:19.300 IOPS MiB/s Average min max 00:26:19.300 Core 2: 18926.92 73.93 844.71 24.94 12236.21 00:26:19.300 Core 3: 19238.65 75.15 830.99 13.79 12137.15 00:26:19.300 ========================================================================== 00:26:19.300 Total : 38165.57 149.08 837.80 13.79 12236.21 00:26:19.300 00:26:19.300 Total operations: 190867, translate 190763 pull_push 0 memzero 104 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:19.300 rmmod nvme_rdma 00:26:19.300 rmmod nvme_fabrics 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@513 -- # '[' -n 3140944 ']' 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@514 -- # killprocess 3140944 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 3140944 ']' 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 3140944 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.300 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3140944 00:26:19.559 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:19.559 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:19.559 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3140944' 00:26:19.559 killing 
process with pid 3140944 00:26:19.559 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 3140944 00:26:19.559 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 3140944 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:26:19.817 00:26:19.817 real 0m32.313s 00:26:19.817 user 1m35.096s 00:26:19.817 sys 0m6.254s 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:19.817 ************************************ 00:26:19.817 END TEST dma 00:26:19.817 ************************************ 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.817 ************************************ 00:26:19.817 START TEST nvmf_identify 00:26:19.817 ************************************ 00:26:19.817 21:55:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:26:19.817 * Looking for test storage... 00:26:20.076 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:20.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.076 --rc genhtml_branch_coverage=1 00:26:20.076 --rc genhtml_function_coverage=1 00:26:20.076 --rc genhtml_legend=1 00:26:20.076 --rc geninfo_all_blocks=1 00:26:20.076 --rc geninfo_unexecuted_blocks=1 00:26:20.076 00:26:20.076 ' 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:20.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.076 --rc genhtml_branch_coverage=1 00:26:20.076 --rc genhtml_function_coverage=1 00:26:20.076 --rc genhtml_legend=1 00:26:20.076 --rc geninfo_all_blocks=1 00:26:20.076 --rc geninfo_unexecuted_blocks=1 00:26:20.076 00:26:20.076 ' 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:20.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.076 --rc genhtml_branch_coverage=1 00:26:20.076 --rc genhtml_function_coverage=1 00:26:20.076 --rc genhtml_legend=1 00:26:20.076 --rc geninfo_all_blocks=1 00:26:20.076 --rc geninfo_unexecuted_blocks=1 00:26:20.076 00:26:20.076 ' 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:20.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.076 --rc genhtml_branch_coverage=1 00:26:20.076 --rc genhtml_function_coverage=1 00:26:20.076 --rc genhtml_legend=1 00:26:20.076 --rc geninfo_all_blocks=1 00:26:20.076 --rc geninfo_unexecuted_blocks=1 00:26:20.076 00:26:20.076 ' 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:20.076 21:55:52 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.076 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:20.077 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:20.077 21:55:52 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.077 21:55:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.636 21:55:58 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:26.636 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:26.636 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 
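Two details in the trace above are worth unpacking. First, the lone stderr line ("common.sh: line 33: [: : integer expression expected") is the shell's numeric test '[' '' -eq 1 ']' choking on an empty value; a minimal defensive rewrite, with a placeholder variable name since the trace does not show which variable was empty, would be:

    # guard the numeric comparison against an unset/empty flag
    # ("some_flag" is a stand-in; the real variable is not visible in the trace)
    if [ "${some_flag:-0}" -eq 1 ]; then
        : # branch taken only when the flag is explicitly set to 1
    fi

Second, the e810/x722/mlx arrays being populated here key NICs by PCI vendor:device ID (0x15b3 is Mellanox, and 0x1015 is the ConnectX-4 Lx ID the scan reports above for both ports). As a standalone sketch of the same matching with stock pciutils, assuming the field layout of lspci -Dn:

    # print "slot vendor:device" for every Mellanox function,
    # mirroring the pci_bus_cache lookups in nvmf/common.sh
    lspci -Dn | awk '$3 ~ /^15b3:/ {print $1, $3}'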
00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.636 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:26.637 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:26.637 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # rdma_device_init 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@526 -- # allocate_nic_ips 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:26.637 
21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:26.637 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:26.637 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:26.637 altname enp217s0f0np0 00:26:26.637 altname ens818f0np0 00:26:26.637 inet 192.168.100.8/24 scope global mlx_0_0 00:26:26.637 valid_lft forever preferred_lft forever 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # 
ip -o -4 addr show mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:26.637 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:26.637 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:26.637 altname enp217s0f1np1 00:26:26.637 altname ens818f1np1 00:26:26.637 inet 192.168.100.9/24 scope global mlx_0_1 00:26:26.637 valid_lft forever preferred_lft forever 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:26:26.637 192.168.100.9' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:26:26.637 192.168.100.9' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # head -n 1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:26:26.637 192.168.100.9' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # tail -n +2 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # head -n 1 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:26.637 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3148421 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3148421 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3148421 ']' 00:26:26.638 21:55:58 
nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.638 [2024-11-29 21:55:58.630216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:26.638 [2024-11-29 21:55:58.630269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.638 [2024-11-29 21:55:58.701870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.638 [2024-11-29 21:55:58.742827] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.638 [2024-11-29 21:55:58.742868] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.638 [2024-11-29 21:55:58.742877] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.638 [2024-11-29 21:55:58.742886] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.638 [2024-11-29 21:55:58.742893] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.638 [2024-11-29 21:55:58.742939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.638 [2024-11-29 21:55:58.743037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.638 [2024-11-29 21:55:58.743128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.638 [2024-11-29 21:55:58.743130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.638 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.638 [2024-11-29 21:55:58.869165] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1710f50/0x1715400) succeed. 00:26:26.638 [2024-11-29 21:55:58.879535] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1712540/0x1756aa0) succeed. 
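At this point the harness has launched the target and begins configuring it over SPDK's JSON-RPC socket (/var/tmp/spdk.sock); rpc_cmd is a thin wrapper around scripts/rpc.py. A hand-run sketch of this first bring-up step, with every argument copied from the trace (-m 0xF pins cores 0-3, -e 0xFFFF is the tracepoint group mask noted above):

    # start the nvmf target, then create the RDMA transport
    # with the buffer settings the harness chose
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192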
00:26:26.897 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.897 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:26.897 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.897 21:55:58 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.897 Malloc0 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.897 [2024-11-29 21:55:59.089708] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.897 [ 00:26:26.897 { 00:26:26.897 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:26.897 "subtype": "Discovery", 00:26:26.897 "listen_addresses": [ 00:26:26.897 { 00:26:26.897 "trtype": "RDMA", 
00:26:26.897 "adrfam": "IPv4", 00:26:26.897 "traddr": "192.168.100.8", 00:26:26.897 "trsvcid": "4420" 00:26:26.897 } 00:26:26.897 ], 00:26:26.897 "allow_any_host": true, 00:26:26.897 "hosts": [] 00:26:26.897 }, 00:26:26.897 { 00:26:26.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.897 "subtype": "NVMe", 00:26:26.897 "listen_addresses": [ 00:26:26.897 { 00:26:26.897 "trtype": "RDMA", 00:26:26.897 "adrfam": "IPv4", 00:26:26.897 "traddr": "192.168.100.8", 00:26:26.897 "trsvcid": "4420" 00:26:26.897 } 00:26:26.897 ], 00:26:26.897 "allow_any_host": true, 00:26:26.897 "hosts": [], 00:26:26.897 "serial_number": "SPDK00000000000001", 00:26:26.897 "model_number": "SPDK bdev Controller", 00:26:26.897 "max_namespaces": 32, 00:26:26.897 "min_cntlid": 1, 00:26:26.897 "max_cntlid": 65519, 00:26:26.897 "namespaces": [ 00:26:26.897 { 00:26:26.897 "nsid": 1, 00:26:26.897 "bdev_name": "Malloc0", 00:26:26.897 "name": "Malloc0", 00:26:26.897 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:26.897 "eui64": "ABCDEF0123456789", 00:26:26.897 "uuid": "10971257-cf17-46ec-b307-8ad430221d8d" 00:26:26.897 } 00:26:26.897 ] 00:26:26.897 } 00:26:26.897 ] 00:26:26.897 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.898 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:27.161 [2024-11-29 21:55:59.146135] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:27.161 [2024-11-29 21:55:59.146174] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148611 ] 00:26:27.161 [2024-11-29 21:55:59.195921] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:27.161 [2024-11-29 21:55:59.195997] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:26:27.161 [2024-11-29 21:55:59.196023] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:26:27.161 [2024-11-29 21:55:59.196028] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:26:27.161 [2024-11-29 21:55:59.196063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:27.161 [2024-11-29 21:55:59.211193] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
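For reference, the configuration that spdk_nvme_identify is now connecting to was assembled by the rpc_cmd calls traced above, and the nvmf_get_subsystems JSON just confirmed the result: one Malloc-backed subsystem plus the discovery service. Condensed into direct rpc.py invocations, all values verbatim from this run:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420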
00:26:27.161 [2024-11-29 21:55:59.225789] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:27.161 [2024-11-29 21:55:59.225800] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:26:27.161 [2024-11-29 21:55:59.225807] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225814] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225820] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225826] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225833] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225839] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225845] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225851] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225857] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225863] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225869] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225875] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225881] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225887] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225894] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225900] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225906] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225912] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225918] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225924] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225930] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225936] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225942] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 
21:55:59.225948] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225954] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225960] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225969] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225975] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225981] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225987] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225993] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.225999] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:26:27.161 [2024-11-29 21:55:59.226005] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:27.161 [2024-11-29 21:55:59.226009] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:26:27.161 [2024-11-29 21:55:59.226029] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.226043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x183a00 00:26:27.161 [2024-11-29 21:55:59.231671] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.161 [2024-11-29 21:55:59.231682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:27.161 [2024-11-29 21:55:59.231690] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.231697] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:27.161 [2024-11-29 21:55:59.231705] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:27.161 [2024-11-29 21:55:59.231712] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:27.161 [2024-11-29 21:55:59.231727] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.231735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.161 [2024-11-29 21:55:59.231763] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.161 [2024-11-29 21:55:59.231781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:26:27.161 [2024-11-29 21:55:59.231788] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:27.161 [2024-11-29 21:55:59.231794] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.231801] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:27.161 [2024-11-29 21:55:59.231809] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.231817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.161 [2024-11-29 21:55:59.231837] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.161 [2024-11-29 21:55:59.231843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:26:27.161 [2024-11-29 21:55:59.231850] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:27.161 [2024-11-29 21:55:59.231856] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.231863] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:27.161 [2024-11-29 21:55:59.231875] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.231882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.161 [2024-11-29 21:55:59.231900] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.161 [2024-11-29 21:55:59.231905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:27.161 [2024-11-29 21:55:59.231912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:27.161 [2024-11-29 21:55:59.231918] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183a00 00:26:27.161 [2024-11-29 21:55:59.231926] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.231934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.162 [2024-11-29 21:55:59.231950] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.231955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.231962] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:27.162 [2024-11-29 21:55:59.231968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:27.162 [2024-11-29 21:55:59.231974] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.231980] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:27.162 [2024-11-29 21:55:59.232087] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:27.162 [2024-11-29 21:55:59.232093] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:27.162 [2024-11-29 21:55:59.232103] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.162 [2024-11-29 21:55:59.232128] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:27.162 [2024-11-29 21:55:59.232146] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232154] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.162 [2024-11-29 21:55:59.232181] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232192] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:27.162 [2024-11-29 21:55:59.232199] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:27.162 [2024-11-29 21:55:59.232205] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232212] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:27.162 [2024-11-29 21:55:59.232229] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:27.162 [2024-11-29 21:55:59.232239] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:26:27.162 [2024-11-29 21:55:59.232284] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232290] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232299] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:27.162 [2024-11-29 21:55:59.232306] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:27.162 [2024-11-29 21:55:59.232312] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:27.162 [2024-11-29 21:55:59.232318] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:27.162 [2024-11-29 21:55:59.232324] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:27.162 [2024-11-29 21:55:59.232330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:27.162 [2024-11-29 21:55:59.232336] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232343] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:27.162 [2024-11-29 21:55:59.232354] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.162 [2024-11-29 21:55:59.232388] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232403] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.162 [2024-11-29 21:55:59.232417] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.162 [2024-11-29 21:55:59.232431] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.162 [2024-11-29 21:55:59.232446] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.162 [2024-11-29 21:55:59.232459] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:26:27.162 [2024-11-29 21:55:59.232465] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232475] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:27.162 [2024-11-29 21:55:59.232483] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.162 [2024-11-29 21:55:59.232507] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232520] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:27.162 [2024-11-29 21:55:59.232526] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:27.162 [2024-11-29 21:55:59.232532] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232541] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:26:27.162 [2024-11-29 21:55:59.232572] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232585] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232595] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:27.162 [2024-11-29 21:55:59.232619] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183a00 00:26:27.162 [2024-11-29 21:55:59.232635] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.162 [2024-11-29 21:55:59.232654] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232676] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0ac0 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183a00 00:26:27.162 [2024-11-29 21:55:59.232690] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232698] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232710] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232716] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.162 [2024-11-29 21:55:59.232722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:27.162 [2024-11-29 21:55:59.232731] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183a00 00:26:27.162 [2024-11-29 21:55:59.232739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183a00 00:26:27.162 [2024-11-29 21:55:59.232745] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183a00 00:26:27.163 [2024-11-29 21:55:59.232764] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.163 [2024-11-29 21:55:59.232769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:27.163 [2024-11-29 21:55:59.232780] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183a00 00:26:27.163 ===================================================== 00:26:27.163 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:27.163 ===================================================== 00:26:27.163 Controller Capabilities/Features 00:26:27.163 ================================ 00:26:27.163 Vendor ID: 0000 00:26:27.163 Subsystem Vendor ID: 0000 00:26:27.163 Serial Number: .................... 00:26:27.163 Model Number: ........................................ 
00:26:27.163 Firmware Version: 24.09.1 00:26:27.163 Recommended Arb Burst: 0 00:26:27.163 IEEE OUI Identifier: 00 00 00 00:26:27.163 Multi-path I/O 00:26:27.163 May have multiple subsystem ports: No 00:26:27.163 May have multiple controllers: No 00:26:27.163 Associated with SR-IOV VF: No 00:26:27.163 Max Data Transfer Size: 131072 00:26:27.163 Max Number of Namespaces: 0 00:26:27.163 Max Number of I/O Queues: 1024 00:26:27.163 NVMe Specification Version (VS): 1.3 00:26:27.163 NVMe Specification Version (Identify): 1.3 00:26:27.163 Maximum Queue Entries: 128 00:26:27.163 Contiguous Queues Required: Yes 00:26:27.163 Arbitration Mechanisms Supported 00:26:27.163 Weighted Round Robin: Not Supported 00:26:27.163 Vendor Specific: Not Supported 00:26:27.163 Reset Timeout: 15000 ms 00:26:27.163 Doorbell Stride: 4 bytes 00:26:27.163 NVM Subsystem Reset: Not Supported 00:26:27.163 Command Sets Supported 00:26:27.163 NVM Command Set: Supported 00:26:27.163 Boot Partition: Not Supported 00:26:27.163 Memory Page Size Minimum: 4096 bytes 00:26:27.163 Memory Page Size Maximum: 4096 bytes 00:26:27.163 Persistent Memory Region: Not Supported 00:26:27.163 Optional Asynchronous Events Supported 00:26:27.163 Namespace Attribute Notices: Not Supported 00:26:27.163 Firmware Activation Notices: Not Supported 00:26:27.163 ANA Change Notices: Not Supported 00:26:27.163 PLE Aggregate Log Change Notices: Not Supported 00:26:27.163 LBA Status Info Alert Notices: Not Supported 00:26:27.163 EGE Aggregate Log Change Notices: Not Supported 00:26:27.163 Normal NVM Subsystem Shutdown event: Not Supported 00:26:27.163 Zone Descriptor Change Notices: Not Supported 00:26:27.163 Discovery Log Change Notices: Supported 00:26:27.163 Controller Attributes 00:26:27.163 128-bit Host Identifier: Not Supported 00:26:27.163 Non-Operational Permissive Mode: Not Supported 00:26:27.163 NVM Sets: Not Supported 00:26:27.163 Read Recovery Levels: Not Supported 00:26:27.163 Endurance Groups: Not Supported 00:26:27.163 Predictable Latency Mode: Not Supported 00:26:27.163 Traffic Based Keep Alive: Not Supported 00:26:27.163 Namespace Granularity: Not Supported 00:26:27.163 SQ Associations: Not Supported 00:26:27.163 UUID List: Not Supported 00:26:27.163 Multi-Domain Subsystem: Not Supported 00:26:27.163 Fixed Capacity Management: Not Supported 00:26:27.163 Variable Capacity Management: Not Supported 00:26:27.163 Delete Endurance Group: Not Supported 00:26:27.163 Delete NVM Set: Not Supported 00:26:27.163 Extended LBA Formats Supported: Not Supported 00:26:27.163 Flexible Data Placement Supported: Not Supported 00:26:27.163 00:26:27.163 Controller Memory Buffer Support 00:26:27.163 ================================ 00:26:27.163 Supported: No 00:26:27.163 00:26:27.163 Persistent Memory Region Support 00:26:27.163 ================================ 00:26:27.163 Supported: No 00:26:27.163 00:26:27.163 Admin Command Set Attributes 00:26:27.163 ============================ 00:26:27.163 Security Send/Receive: Not Supported 00:26:27.163 Format NVM: Not Supported 00:26:27.163 Firmware Activate/Download: Not Supported 00:26:27.163 Namespace Management: Not Supported 00:26:27.163 Device Self-Test: Not Supported 00:26:27.163 Directives: Not Supported 00:26:27.163 NVMe-MI: Not Supported 00:26:27.163 Virtualization Management: Not Supported 00:26:27.163 Doorbell Buffer Config: Not Supported 00:26:27.163 Get LBA Status Capability: Not Supported 00:26:27.163 Command & Feature Lockdown Capability: Not Supported 00:26:27.163 Abort Command Limit: 1 00:26:27.163 
Async Event Request Limit: 4 00:26:27.163 Number of Firmware Slots: N/A 00:26:27.163 Firmware Slot 1 Read-Only: N/A 00:26:27.163 Firmware Activation Without Reset: N/A 00:26:27.163 Multiple Update Detection Support: N/A 00:26:27.163 Firmware Update Granularity: No Information Provided 00:26:27.163 Per-Namespace SMART Log: No 00:26:27.163 Asymmetric Namespace Access Log Page: Not Supported 00:26:27.163 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:27.163 Command Effects Log Page: Not Supported 00:26:27.163 Get Log Page Extended Data: Supported 00:26:27.163 Telemetry Log Pages: Not Supported 00:26:27.163 Persistent Event Log Pages: Not Supported 00:26:27.163 Supported Log Pages Log Page: May Support 00:26:27.163 Commands Supported & Effects Log Page: Not Supported 00:26:27.163 Feature Identifiers & Effects Log Page: May Support 00:26:27.163 NVMe-MI Commands & Effects Log Page: May Support 00:26:27.163 Data Area 4 for Telemetry Log: Not Supported 00:26:27.163 Error Log Page Entries Supported: 128 00:26:27.163 Keep Alive: Not Supported 00:26:27.163 00:26:27.163 NVM Command Set Attributes 00:26:27.163 ========================== 00:26:27.163 Submission Queue Entry Size 00:26:27.163 Max: 1 00:26:27.163 Min: 1 00:26:27.163 Completion Queue Entry Size 00:26:27.163 Max: 1 00:26:27.163 Min: 1 00:26:27.163 Number of Namespaces: 0 00:26:27.163 Compare Command: Not Supported 00:26:27.163 Write Uncorrectable Command: Not Supported 00:26:27.163 Dataset Management Command: Not Supported 00:26:27.163 Write Zeroes Command: Not Supported 00:26:27.163 Set Features Save Field: Not Supported 00:26:27.163 Reservations: Not Supported 00:26:27.163 Timestamp: Not Supported 00:26:27.163 Copy: Not Supported 00:26:27.163 Volatile Write Cache: Not Present 00:26:27.163 Atomic Write Unit (Normal): 1 00:26:27.163 Atomic Write Unit (PFail): 1 00:26:27.163 Atomic Compare & Write Unit: 1 00:26:27.163 Fused Compare & Write: Supported 00:26:27.163 Scatter-Gather List 00:26:27.163 SGL Command Set: Supported 00:26:27.163 SGL Keyed: Supported 00:26:27.163 SGL Bit Bucket Descriptor: Not Supported 00:26:27.163 SGL Metadata Pointer: Not Supported 00:26:27.163 Oversized SGL: Not Supported 00:26:27.163 SGL Metadata Address: Not Supported 00:26:27.163 SGL Offset: Supported 00:26:27.163 Transport SGL Data Block: Not Supported 00:26:27.163 Replay Protected Memory Block: Not Supported 00:26:27.163 00:26:27.163 Firmware Slot Information 00:26:27.163 ========================= 00:26:27.163 Active slot: 0 00:26:27.163 00:26:27.163 00:26:27.163 Error Log 00:26:27.163 ========= 00:26:27.163 00:26:27.163 Active Namespaces 00:26:27.163 ================= 00:26:27.163 Discovery Log Page 00:26:27.163 ================== 00:26:27.163 Generation Counter: 2 00:26:27.163 Number of Records: 2 00:26:27.163 Record Format: 0 00:26:27.163 00:26:27.163 Discovery Log Entry 0 00:26:27.163 ---------------------- 00:26:27.163 Transport Type: 1 (RDMA) 00:26:27.163 Address Family: 1 (IPv4) 00:26:27.163 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:27.163 Entry Flags: 00:26:27.163 Duplicate Returned Information: 1 00:26:27.163 Explicit Persistent Connection Support for Discovery: 1 00:26:27.163 Transport Requirements: 00:26:27.163 Secure Channel: Not Required 00:26:27.163 Port ID: 0 (0x0000) 00:26:27.163 Controller ID: 65535 (0xffff) 00:26:27.163 Admin Max SQ Size: 128 00:26:27.163 Transport Service Identifier: 4420 00:26:27.163 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:27.163 Transport Address: 192.168.100.8 
00:26:27.163 Transport Specific Address Subtype - RDMA 00:26:27.163 RDMA QP Service Type: 1 (Reliable Connected) 00:26:27.163 RDMA Provider Type: 1 (No provider specified) 00:26:27.163 RDMA CM Service: 1 (RDMA_CM) 00:26:27.163 Discovery Log Entry 1 00:26:27.163 ---------------------- 00:26:27.163 Transport Type: 1 (RDMA) 00:26:27.163 Address Family: 1 (IPv4) 00:26:27.163 Subsystem Type: 2 (NVM Subsystem) 00:26:27.163 Entry Flags: 00:26:27.163 Duplicate Returned Information: 0 00:26:27.163 Explicit Persistent Connection Support for Discovery: 0 00:26:27.163 Transport Requirements: 00:26:27.163 Secure Channel: Not Required 00:26:27.163 Port ID: 0 (0x0000) 00:26:27.163 Controller ID: 65535 (0xffff) 00:26:27.163 Admin Max SQ Size: [2024-11-29 21:55:59.232854] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:27.164 [2024-11-29 21:55:59.232864] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43834 doesn't match qid 00:26:27.164 [2024-11-29 21:55:59.232878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32574 cdw0:5 sqhd:99b0 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.232884] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43834 doesn't match qid 00:26:27.164 [2024-11-29 21:55:59.232892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32574 cdw0:5 sqhd:99b0 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.232899] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43834 doesn't match qid 00:26:27.164 [2024-11-29 21:55:59.232906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32574 cdw0:5 sqhd:99b0 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.232913] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 43834 doesn't match qid 00:26:27.164 [2024-11-29 21:55:59.232920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32574 cdw0:5 sqhd:99b0 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.232929] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.232936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.232956] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.232961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.232970] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.232978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.232984] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.232999] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233011] 
nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:27.164 [2024-11-29 21:55:59.233019] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:27.164 [2024-11-29 21:55:59.233025] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233033] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233057] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233069] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233078] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233104] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233116] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233125] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233156] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233168] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233177] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233200] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233212] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233221] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233245] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233257] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233266] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233293] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233305] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233314] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233343] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233355] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233364] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233386] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233397] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233406] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233430] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:26:27.164 [2024-11-29 21:55:59.233442] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233451] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233478] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233490] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233499] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233526] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233537] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233546] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233573] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233584] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233593] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 21:55:59.233624] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233636] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233644] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.164 [2024-11-29 21:55:59.233652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.164 [2024-11-29 
21:55:59.233677] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.164 [2024-11-29 21:55:59.233683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:27.164 [2024-11-29 21:55:59.233689] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233698] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.233725] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.233731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.233737] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233746] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.233773] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.233779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.233785] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233794] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.233817] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.233823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.233829] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233837] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.233861] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.233866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.233872] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233881] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.233912] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.233918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.233924] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233933] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.233962] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.233967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.233974] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233982] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.233990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234015] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.234027] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234035] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234064] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.234076] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234085] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234114] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:27.165 
[2024-11-29 21:55:59.234126] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234134] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234157] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.234169] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234178] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234203] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.234215] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234223] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234254] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.234266] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234275] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234304] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.234316] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234324] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234349] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.234361] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234370] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.165 [2024-11-29 21:55:59.234395] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.165 [2024-11-29 21:55:59.234401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:26:27.165 [2024-11-29 21:55:59.234407] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183a00 00:26:27.165 [2024-11-29 21:55:59.234417] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234442] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234454] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234463] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234492] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234504] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234512] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234541] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234553] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234562] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234593] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234604] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234613] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234642] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234654] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234663] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234688] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234699] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234710] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234737] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234749] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234757] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234788] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 
21:55:59.234800] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234809] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234832] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234844] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234852] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234878] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234889] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234898] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234923] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234935] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234944] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.234971] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.234976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.234984] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.234993] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.235018] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.235023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.235029] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235038] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.235063] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.235069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.235075] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235084] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.235111] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.235116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.235123] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235131] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.235158] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.235164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.235170] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235179] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.235204] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.235210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.235216] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235224] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.166 [2024-11-29 21:55:59.235246] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.166 [2024-11-29 21:55:59.235251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:27.166 [2024-11-29 21:55:59.235259] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183a00 00:26:27.166 [2024-11-29 21:55:59.235268] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235291] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.235303] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235311] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235337] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.235348] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235357] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235380] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.235392] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235401] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235424] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 
21:55:59.235436] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235445] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235472] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.235483] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235492] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235517] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.235531] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235539] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235563] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.235574] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235583] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235606] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.235618] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235627] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.235634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.235648] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.235654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.235660] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.239673] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.239682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.167 [2024-11-29 21:55:59.239702] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.167 [2024-11-29 21:55:59.239708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:26:27.167 [2024-11-29 21:55:59.239715] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.239721] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:26:27.167 128 00:26:27.167 Transport Service Identifier: 4420 00:26:27.167 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:27.167 Transport Address: 192.168.100.8 00:26:27.167 Transport Specific Address Subtype - RDMA 00:26:27.167 RDMA QP Service Type: 1 (Reliable Connected) 00:26:27.167 RDMA Provider Type: 1 (No provider specified) 00:26:27.167 RDMA CM Service: 1 (RDMA_CM) 00:26:27.167 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:27.167 [2024-11-29 21:55:59.314638] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:27.167 [2024-11-29 21:55:59.314704] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148613 ] 00:26:27.167 [2024-11-29 21:55:59.360925] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:27.167 [2024-11-29 21:55:59.360991] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:26:27.167 [2024-11-29 21:55:59.361009] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:26:27.167 [2024-11-29 21:55:59.361014] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:26:27.167 [2024-11-29 21:55:59.361038] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:27.167 [2024-11-29 21:55:59.372144] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
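[editor note] The shell step above re-runs spdk_nvme_identify against nqn.2016-06.io.spdk:cnode1, and the debug lines that follow trace SPDK parsing the -r transport-ID string and driving the controller-connect sequence (adrfam/trsvcid parse, RDMA CM connect, admin queue setup). For reference, a minimal host program performing the same attach through SPDK's public C API could look like the sketch below; this is a sketch only, assuming SPDK v24.09-era headers, with error handling trimmed and version-specific env-opts details omitted:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport-ID string format the log passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the connect-adminq -> read vs/cap -> enable sequence traced above. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Subsystem NQN: %s\n", cdata->subnqn);

        spdk_nvme_detach(ctrlr); /* triggers the CC.SHN shutdown handshake seen earlier */
        return 0;
    }
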
00:26:27.167 [2024-11-29 21:55:59.386261] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:27.167 [2024-11-29 21:55:59.386271] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:26:27.167 [2024-11-29 21:55:59.386278] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386286] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386292] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386298] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386304] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386311] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386317] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386323] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386329] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386335] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386342] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386348] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386354] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386360] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386367] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386373] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386379] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386385] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386392] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386398] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386406] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386413] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183a00 00:26:27.167 [2024-11-29 21:55:59.386419] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 
21:55:59.386425] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386432] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386438] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386444] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386450] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386456] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386463] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386469] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386475] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:26:27.168 [2024-11-29 21:55:59.386480] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:26:27.168 [2024-11-29 21:55:59.386484] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:26:27.168 [2024-11-29 21:55:59.386500] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.386512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x183a00 00:26:27.168 [2024-11-29 21:55:59.391672] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.391682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.391689] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391699] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:27.168 [2024-11-29 21:55:59.391706] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:27.168 [2024-11-29 21:55:59.391712] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:27.168 [2024-11-29 21:55:59.391724] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.168 [2024-11-29 21:55:59.391755] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.391761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.391768] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:27.168 [2024-11-29 21:55:59.391774] nvme_rdma.c:2389:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391781] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:27.168 [2024-11-29 21:55:59.391789] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.168 [2024-11-29 21:55:59.391814] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.391820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.391827] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:27.168 [2024-11-29 21:55:59.391833] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391840] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:27.168 [2024-11-29 21:55:59.391848] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.168 [2024-11-29 21:55:59.391871] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.391877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.391884] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:27.168 [2024-11-29 21:55:59.391890] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391898] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.168 [2024-11-29 21:55:59.391922] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.391928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.391934] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:27.168 [2024-11-29 21:55:59.391940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:27.168 [2024-11-29 21:55:59.391946] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.391953] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:26:27.168 [2024-11-29 21:55:59.392059] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:27.168 [2024-11-29 21:55:59.392064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:27.168 [2024-11-29 21:55:59.392072] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.168 [2024-11-29 21:55:59.392102] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.392107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.392114] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:27.168 [2024-11-29 21:55:59.392120] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392128] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.168 [2024-11-29 21:55:59.392154] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.392159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.392165] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:27.168 [2024-11-29 21:55:59.392171] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:27.168 [2024-11-29 21:55:59.392177] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392184] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:27.168 [2024-11-29 21:55:59.392193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:27.168 [2024-11-29 21:55:59.392202] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:26:27.168 [2024-11-29 21:55:59.392247] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.392253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.392261] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:27.168 [2024-11-29 21:55:59.392267] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:27.168 [2024-11-29 21:55:59.392273] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:27.168 [2024-11-29 21:55:59.392278] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:27.168 [2024-11-29 21:55:59.392284] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:27.168 [2024-11-29 21:55:59.392290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:27.168 [2024-11-29 21:55:59.392296] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:27.168 [2024-11-29 21:55:59.392313] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.168 [2024-11-29 21:55:59.392338] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.168 [2024-11-29 21:55:59.392343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.168 [2024-11-29 21:55:59.392351] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.168 [2024-11-29 21:55:59.392366] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.168 [2024-11-29 21:55:59.392381] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.168 [2024-11-29 21:55:59.392388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.168 [2024-11-29 21:55:59.392395] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.169 [2024-11-29 21:55:59.392408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392414] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392427] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392434] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.169 [2024-11-29 21:55:59.392460] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.392466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.392472] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:27.169 [2024-11-29 21:55:59.392479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392484] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392502] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392509] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.169 [2024-11-29 21:55:59.392541] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.392546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.392598] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392604] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392612] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392621] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183a00 00:26:27.169 [2024-11-29 21:55:59.392654] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.392661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.392680] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:27.169 
[2024-11-29 21:55:59.392694] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392701] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392716] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:26:27.169 [2024-11-29 21:55:59.392752] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.392757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.392768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392774] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392782] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392790] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183a00 00:26:27.169 [2024-11-29 21:55:59.392822] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.392827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.392839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392845] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392853] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392869] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392876] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
ID (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392889] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:27.169 [2024-11-29 21:55:59.392895] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:27.169 [2024-11-29 21:55:59.392901] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:27.169 [2024-11-29 21:55:59.392917] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.169 [2024-11-29 21:55:59.392932] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.169 [2024-11-29 21:55:59.392950] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.392956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.392963] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392969] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.392974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.392981] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392990] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.392998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.169 [2024-11-29 21:55:59.393015] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.393020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.393027] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.393036] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.393043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.169 [2024-11-29 21:55:59.393060] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.393066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.393072] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cf8a8 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.393081] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.393089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.169 [2024-11-29 21:55:59.393111] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.169 [2024-11-29 21:55:59.393117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:26:27.169 [2024-11-29 21:55:59.393123] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.393137] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.393145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183a00 00:26:27.169 [2024-11-29 21:55:59.393153] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.393160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183a00 00:26:27.169 [2024-11-29 21:55:59.393170] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0ac0 length 0x40 lkey 0x183a00 00:26:27.169 [2024-11-29 21:55:59.393178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183a00 00:26:27.169 [2024-11-29 21:55:59.393186] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x183a00 00:26:27.170 [2024-11-29 21:55:59.393193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183a00 00:26:27.170 [2024-11-29 21:55:59.393202] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.170 [2024-11-29 21:55:59.393207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:27.170 [2024-11-29 21:55:59.393219] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183a00 00:26:27.170 [2024-11-29 21:55:59.393226] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.170 [2024-11-29 21:55:59.393231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:27.170 [2024-11-29 21:55:59.393245] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183a00 00:26:27.170 [2024-11-29 21:55:59.393251] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.170 [2024-11-29 21:55:59.393257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
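Once the controller reaches the ready state, the GET FEATURES and GET LOG PAGE commands above populate the report that follows: the cdw10 values 07ff0001, 007f0002, 007f0003 and 03ff0005 select the Error (01h), SMART/Health (02h), Firmware Slot (03h) and Commands Supported and Effects (05h) log pages. As a hedged sketch, one of these can be issued through the public API roughly as below; 'ctrlr' is assumed from the earlier sketch and 'g_done' is a hypothetical completion flag kept minimal for brevity:

    static bool g_done; /* hypothetical flag for the sketch */

    static void log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "get log page failed\n");
        }
        g_done = true;
    }

    static void fetch_health_log(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* RDMA payloads need registered memory, hence spdk_dma_zmalloc()
         * rather than a plain buffer (compare the 'local addr ... lkey'
         * lines in the debug output). */
        struct spdk_nvme_health_information_page *health =
            spdk_dma_zmalloc(sizeof(*health), 0x1000, NULL);

        g_done = false;
        if (health == NULL ||
            spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                             SPDK_NVME_GLOBAL_NS_TAG, health,
                                             sizeof(*health), 0,
                                             log_page_cb, NULL) != 0) {
            spdk_dma_free(health);
            return;
        }
        while (!g_done) {
            /* Drives the CQ polling seen in the nvme_rdma.c debug lines. */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        spdk_dma_free(health);
    }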
00:26:27.170 [2024-11-29 21:55:59.393264] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183a00 00:26:27.170 [2024-11-29 21:55:59.393270] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.170 [2024-11-29 21:55:59.393275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:27.170 [2024-11-29 21:55:59.393285] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183a00 00:26:27.170 ===================================================== 00:26:27.170 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:27.170 ===================================================== 00:26:27.170 Controller Capabilities/Features 00:26:27.170 ================================ 00:26:27.170 Vendor ID: 8086 00:26:27.170 Subsystem Vendor ID: 8086 00:26:27.170 Serial Number: SPDK00000000000001 00:26:27.170 Model Number: SPDK bdev Controller 00:26:27.170 Firmware Version: 24.09.1 00:26:27.170 Recommended Arb Burst: 6 00:26:27.170 IEEE OUI Identifier: e4 d2 5c 00:26:27.170 Multi-path I/O 00:26:27.170 May have multiple subsystem ports: Yes 00:26:27.170 May have multiple controllers: Yes 00:26:27.170 Associated with SR-IOV VF: No 00:26:27.170 Max Data Transfer Size: 131072 00:26:27.170 Max Number of Namespaces: 32 00:26:27.170 Max Number of I/O Queues: 127 00:26:27.170 NVMe Specification Version (VS): 1.3 00:26:27.170 NVMe Specification Version (Identify): 1.3 00:26:27.170 Maximum Queue Entries: 128 00:26:27.170 Contiguous Queues Required: Yes 00:26:27.170 Arbitration Mechanisms Supported 00:26:27.170 Weighted Round Robin: Not Supported 00:26:27.170 Vendor Specific: Not Supported 00:26:27.170 Reset Timeout: 15000 ms 00:26:27.170 Doorbell Stride: 4 bytes 00:26:27.170 NVM Subsystem Reset: Not Supported 00:26:27.170 Command Sets Supported 00:26:27.170 NVM Command Set: Supported 00:26:27.170 Boot Partition: Not Supported 00:26:27.170 Memory Page Size Minimum: 4096 bytes 00:26:27.170 Memory Page Size Maximum: 4096 bytes 00:26:27.170 Persistent Memory Region: Not Supported 00:26:27.170 Optional Asynchronous Events Supported 00:26:27.170 Namespace Attribute Notices: Supported 00:26:27.170 Firmware Activation Notices: Not Supported 00:26:27.170 ANA Change Notices: Not Supported 00:26:27.170 PLE Aggregate Log Change Notices: Not Supported 00:26:27.170 LBA Status Info Alert Notices: Not Supported 00:26:27.170 EGE Aggregate Log Change Notices: Not Supported 00:26:27.170 Normal NVM Subsystem Shutdown event: Not Supported 00:26:27.170 Zone Descriptor Change Notices: Not Supported 00:26:27.170 Discovery Log Change Notices: Not Supported 00:26:27.170 Controller Attributes 00:26:27.170 128-bit Host Identifier: Supported 00:26:27.170 Non-Operational Permissive Mode: Not Supported 00:26:27.170 NVM Sets: Not Supported 00:26:27.170 Read Recovery Levels: Not Supported 00:26:27.170 Endurance Groups: Not Supported 00:26:27.170 Predictable Latency Mode: Not Supported 00:26:27.170 Traffic Based Keep ALive: Not Supported 00:26:27.170 Namespace Granularity: Not Supported 00:26:27.170 SQ Associations: Not Supported 00:26:27.170 UUID List: Not Supported 00:26:27.170 Multi-Domain Subsystem: Not Supported 00:26:27.170 Fixed Capacity Management: Not Supported 00:26:27.170 Variable Capacity Management: Not Supported 00:26:27.170 Delete Endurance Group: Not Supported 00:26:27.170 Delete NVM Set: Not Supported 00:26:27.170 Extended 
LBA Formats Supported: Not Supported 00:26:27.170 Flexible Data Placement Supported: Not Supported 00:26:27.170 00:26:27.170 Controller Memory Buffer Support 00:26:27.170 ================================ 00:26:27.170 Supported: No 00:26:27.170 00:26:27.170 Persistent Memory Region Support 00:26:27.170 ================================ 00:26:27.170 Supported: No 00:26:27.170 00:26:27.170 Admin Command Set Attributes 00:26:27.170 ============================ 00:26:27.170 Security Send/Receive: Not Supported 00:26:27.170 Format NVM: Not Supported 00:26:27.170 Firmware Activate/Download: Not Supported 00:26:27.170 Namespace Management: Not Supported 00:26:27.170 Device Self-Test: Not Supported 00:26:27.170 Directives: Not Supported 00:26:27.170 NVMe-MI: Not Supported 00:26:27.170 Virtualization Management: Not Supported 00:26:27.170 Doorbell Buffer Config: Not Supported 00:26:27.170 Get LBA Status Capability: Not Supported 00:26:27.170 Command & Feature Lockdown Capability: Not Supported 00:26:27.170 Abort Command Limit: 4 00:26:27.170 Async Event Request Limit: 4 00:26:27.170 Number of Firmware Slots: N/A 00:26:27.170 Firmware Slot 1 Read-Only: N/A 00:26:27.170 Firmware Activation Without Reset: N/A 00:26:27.170 Multiple Update Detection Support: N/A 00:26:27.170 Firmware Update Granularity: No Information Provided 00:26:27.170 Per-Namespace SMART Log: No 00:26:27.170 Asymmetric Namespace Access Log Page: Not Supported 00:26:27.170 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:27.170 Command Effects Log Page: Supported 00:26:27.170 Get Log Page Extended Data: Supported 00:26:27.170 Telemetry Log Pages: Not Supported 00:26:27.170 Persistent Event Log Pages: Not Supported 00:26:27.170 Supported Log Pages Log Page: May Support 00:26:27.170 Commands Supported & Effects Log Page: Not Supported 00:26:27.170 Feature Identifiers & Effects Log Page:May Support 00:26:27.170 NVMe-MI Commands & Effects Log Page: May Support 00:26:27.170 Data Area 4 for Telemetry Log: Not Supported 00:26:27.170 Error Log Page Entries Supported: 128 00:26:27.170 Keep Alive: Supported 00:26:27.170 Keep Alive Granularity: 10000 ms 00:26:27.170 00:26:27.170 NVM Command Set Attributes 00:26:27.170 ========================== 00:26:27.170 Submission Queue Entry Size 00:26:27.170 Max: 64 00:26:27.170 Min: 64 00:26:27.170 Completion Queue Entry Size 00:26:27.170 Max: 16 00:26:27.170 Min: 16 00:26:27.170 Number of Namespaces: 32 00:26:27.170 Compare Command: Supported 00:26:27.170 Write Uncorrectable Command: Not Supported 00:26:27.170 Dataset Management Command: Supported 00:26:27.170 Write Zeroes Command: Supported 00:26:27.170 Set Features Save Field: Not Supported 00:26:27.170 Reservations: Supported 00:26:27.170 Timestamp: Not Supported 00:26:27.170 Copy: Supported 00:26:27.170 Volatile Write Cache: Present 00:26:27.170 Atomic Write Unit (Normal): 1 00:26:27.170 Atomic Write Unit (PFail): 1 00:26:27.170 Atomic Compare & Write Unit: 1 00:26:27.170 Fused Compare & Write: Supported 00:26:27.170 Scatter-Gather List 00:26:27.170 SGL Command Set: Supported 00:26:27.170 SGL Keyed: Supported 00:26:27.170 SGL Bit Bucket Descriptor: Not Supported 00:26:27.170 SGL Metadata Pointer: Not Supported 00:26:27.170 Oversized SGL: Not Supported 00:26:27.170 SGL Metadata Address: Not Supported 00:26:27.170 SGL Offset: Supported 00:26:27.170 Transport SGL Data Block: Not Supported 00:26:27.170 Replay Protected Memory Block: Not Supported 00:26:27.170 00:26:27.170 Firmware Slot Information 00:26:27.170 ========================= 00:26:27.170 Active 
slot: 1 00:26:27.171 Slot 1 Firmware Revision: 24.09.1 00:26:27.171 00:26:27.171 00:26:27.171 Commands Supported and Effects 00:26:27.171 ============================== 00:26:27.171 Admin Commands 00:26:27.171 -------------- 00:26:27.171 Get Log Page (02h): Supported 00:26:27.171 Identify (06h): Supported 00:26:27.171 Abort (08h): Supported 00:26:27.171 Set Features (09h): Supported 00:26:27.171 Get Features (0Ah): Supported 00:26:27.171 Asynchronous Event Request (0Ch): Supported 00:26:27.171 Keep Alive (18h): Supported 00:26:27.171 I/O Commands 00:26:27.171 ------------ 00:26:27.171 Flush (00h): Supported LBA-Change 00:26:27.171 Write (01h): Supported LBA-Change 00:26:27.171 Read (02h): Supported 00:26:27.171 Compare (05h): Supported 00:26:27.171 Write Zeroes (08h): Supported LBA-Change 00:26:27.171 Dataset Management (09h): Supported LBA-Change 00:26:27.171 Copy (19h): Supported LBA-Change 00:26:27.171 00:26:27.171 Error Log 00:26:27.171 ========= 00:26:27.171 00:26:27.171 Arbitration 00:26:27.171 =========== 00:26:27.171 Arbitration Burst: 1 00:26:27.171 00:26:27.171 Power Management 00:26:27.171 ================ 00:26:27.171 Number of Power States: 1 00:26:27.171 Current Power State: Power State #0 00:26:27.171 Power State #0: 00:26:27.171 Max Power: 0.00 W 00:26:27.171 Non-Operational State: Operational 00:26:27.171 Entry Latency: Not Reported 00:26:27.171 Exit Latency: Not Reported 00:26:27.171 Relative Read Throughput: 0 00:26:27.171 Relative Read Latency: 0 00:26:27.171 Relative Write Throughput: 0 00:26:27.171 Relative Write Latency: 0 00:26:27.171 Idle Power: Not Reported 00:26:27.171 Active Power: Not Reported 00:26:27.171 Non-Operational Permissive Mode: Not Supported 00:26:27.171 00:26:27.171 Health Information 00:26:27.171 ================== 00:26:27.171 Critical Warnings: 00:26:27.171 Available Spare Space: OK 00:26:27.171 Temperature: OK 00:26:27.171 Device Reliability: OK 00:26:27.171 Read Only: No 00:26:27.171 Volatile Memory Backup: OK 00:26:27.171 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:27.171 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:27.171 Available Spare: 0% 00:26:27.171 Available Spare Threshold: 0% 00:26:27.171 Life Percent[2024-11-29 21:55:59.393366] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393395] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393407] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393436] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:27.171 [2024-11-29 21:55:59.393445] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51556 doesn't match qid 00:26:27.171 [2024-11-29 21:55:59.393459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:5 sqhd:e9b0 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393466] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51556 doesn't match qid 
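The 'Prepare to destruct SSD' line above marks spdk_nvme_identify detaching from the controller: the outstanding ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION, and the repeated FABRIC PROPERTY SET/GET pairs that follow are the shutdown handshake (write CC, then poll CSTS until the shutdown-complete line seen earlier). The status text printed by spdk_nvme_print_completion is derived from the completion's status-code-type/status-code pair, e.g. (00/08) above; a minimal sketch using the public helper:

    #include "spdk/nvme.h"

    /* Sketch: mapping a completion's status fields to the text logged above. */
    static void print_cpl(const struct spdk_nvme_cpl *cpl)
    {
        /* The '(00/08)' pair in the log is status code type / status code. */
        printf("%s (%02x/%02x) cid:%u sqhd:%04x cdw0:%x\n",
               spdk_nvme_cpl_get_status_string(&cpl->status),
               cpl->status.sct, cpl->status.sc,
               cpl->cid, cpl->sqhd, cpl->cdw0);
    }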
00:26:27.171 [2024-11-29 21:55:59.393474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:5 sqhd:e9b0 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393480] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51556 doesn't match qid 00:26:27.171 [2024-11-29 21:55:59.393488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:5 sqhd:e9b0 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393496] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51556 doesn't match qid 00:26:27.171 [2024-11-29 21:55:59.393503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:5 sqhd:e9b0 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393512] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393534] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393548] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393562] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393578] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393590] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:27.171 [2024-11-29 21:55:59.393596] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:27.171 [2024-11-29 21:55:59.393602] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393610] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393638] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393650] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393659] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 
21:55:59.393672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393688] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393700] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393709] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393731] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393744] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393755] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393781] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393793] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393802] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393831] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393843] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393852] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393880] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393892] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393901] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393928] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393941] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393950] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.171 [2024-11-29 21:55:59.393958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.171 [2024-11-29 21:55:59.393974] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.171 [2024-11-29 21:55:59.393980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:27.171 [2024-11-29 21:55:59.393987] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.393996] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394026] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394038] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394049] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394072] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394084] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394093] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394120] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394132] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394141] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394171] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394184] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394193] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394221] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394233] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394242] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394265] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394278] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394287] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394317] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394331] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394340] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394370] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394382] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394391] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394420] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394433] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394442] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394470] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394483] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394492] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394516] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394528] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394536] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394566] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 
21:55:59.394579] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394589] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394613] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394627] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394635] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394657] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394674] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394683] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394706] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394718] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394728] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394751] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394763] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394771] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394800] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394813] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394822] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.172 [2024-11-29 21:55:59.394851] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.172 [2024-11-29 21:55:59.394857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:26:27.172 [2024-11-29 21:55:59.394864] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x183a00 00:26:27.172 [2024-11-29 21:55:59.394873] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.394882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.394901] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.394910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.394917] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.394925] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.394933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.394953] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.394960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.394966] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.394975] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.394984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.394998] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395011] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395019] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395045] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395058] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395067] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395096] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395108] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395117] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395144] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395155] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395164] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395188] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395202] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395211] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395239] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 
21:55:59.395252] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395261] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395289] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395300] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395309] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395338] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395350] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395359] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395384] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395396] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395405] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395427] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395438] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395447] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395472] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395484] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395493] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395520] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395532] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395540] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395569] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395581] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395590] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395614] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.395619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.395625] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395634] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.395642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.395661] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.399674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.399683] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.399693] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.399701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:26:27.173 [2024-11-29 21:55:59.399721] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:26:27.173 [2024-11-29 21:55:59.399727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:26:27.173 [2024-11-29 21:55:59.399733] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x183a00 00:26:27.173 [2024-11-29 21:55:59.399740] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:26:27.434 age Used: 0% 00:26:27.434 Data Units Read: 0 00:26:27.434 Data Units Written: 0 00:26:27.434 Host Read Commands: 0 00:26:27.434 Host Write Commands: 0 00:26:27.434 Controller Busy Time: 0 minutes 00:26:27.434 Power Cycles: 0 00:26:27.434 Power On Hours: 0 hours 00:26:27.434 Unsafe Shutdowns: 0 00:26:27.434 Unrecoverable Media Errors: 0 00:26:27.434 Lifetime Error Log Entries: 0 00:26:27.434 Warning Temperature Time: 0 minutes 00:26:27.434 Critical Temperature Time: 0 minutes 00:26:27.434 00:26:27.434 Number of Queues 00:26:27.434 ================ 00:26:27.434 Number of I/O Submission Queues: 127 00:26:27.434 Number of I/O Completion Queues: 127 00:26:27.434 00:26:27.434 Active Namespaces 00:26:27.434 ================= 00:26:27.434 Namespace ID:1 00:26:27.434 Error Recovery Timeout: Unlimited 00:26:27.434 Command Set Identifier: NVM (00h) 00:26:27.434 Deallocate: Supported 00:26:27.434 Deallocated/Unwritten Error: Not Supported 00:26:27.434 Deallocated Read Value: Unknown 00:26:27.434 Deallocate in Write Zeroes: Not Supported 00:26:27.434 Deallocated Guard Field: 0xFFFF 00:26:27.434 Flush: Supported 00:26:27.434 Reservation: Supported 00:26:27.434 Namespace Sharing Capabilities: Multiple Controllers 00:26:27.434 Size (in LBAs): 131072 (0GiB) 00:26:27.434 Capacity (in LBAs): 131072 (0GiB) 00:26:27.434 Utilization (in LBAs): 131072 (0GiB) 00:26:27.434 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:27.434 EUI64: ABCDEF0123456789 00:26:27.434 UUID: 10971257-cf17-46ec-b307-8ad430221d8d 00:26:27.434 Thin Provisioning: Not Supported 00:26:27.434 Per-NS Atomic Units: Yes 00:26:27.434 Atomic Boundary Size (Normal): 0 00:26:27.434 Atomic Boundary Size (PFail): 0 00:26:27.434 Atomic Boundary Offset: 0 00:26:27.434 Maximum Single Source Range Length: 65535 00:26:27.434 Maximum Copy Length: 65535 00:26:27.434 Maximum Source Range Count: 1 00:26:27.434 NGUID/EUI64 Never Reused: No 00:26:27.434 Namespace Write Protected: No 00:26:27.434 Number of LBA Formats: 1 00:26:27.434 Current LBA Format: LBA Format #00 00:26:27.434 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:27.434 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT 
SIGTERM EXIT 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:27.434 rmmod nvme_rdma 00:26:27.434 rmmod nvme_fabrics 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 3148421 ']' 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 3148421 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3148421 ']' 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3148421 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148421 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148421' 00:26:27.434 killing process with pid 3148421 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3148421 00:26:27.434 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3148421 00:26:27.693 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:27.694 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:26:27.694 00:26:27.694 real 0m7.884s 00:26:27.694 user 0m6.035s 00:26:27.694 sys 0m5.438s 00:26:27.694 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.694 21:55:59 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:27.694 ************************************ 00:26:27.694 END TEST nvmf_identify 00:26:27.694 ************************************ 00:26:27.694 21:55:59 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:26:27.694 21:55:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:27.694 21:55:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:26:27.694 21:55:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.694 ************************************ 00:26:27.694 START TEST nvmf_perf 00:26:27.694 ************************************ 00:26:27.694 21:55:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:26:27.954 * Looking for test storage... 00:26:27.954 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:27.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.954 --rc genhtml_branch_coverage=1 00:26:27.954 --rc genhtml_function_coverage=1 00:26:27.954 --rc genhtml_legend=1 00:26:27.954 --rc geninfo_all_blocks=1 00:26:27.954 --rc geninfo_unexecuted_blocks=1 00:26:27.954 00:26:27.954 ' 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:27.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.954 --rc genhtml_branch_coverage=1 00:26:27.954 --rc genhtml_function_coverage=1 00:26:27.954 --rc genhtml_legend=1 00:26:27.954 --rc geninfo_all_blocks=1 00:26:27.954 --rc geninfo_unexecuted_blocks=1 00:26:27.954 00:26:27.954 ' 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:27.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.954 --rc genhtml_branch_coverage=1 00:26:27.954 --rc genhtml_function_coverage=1 00:26:27.954 --rc genhtml_legend=1 00:26:27.954 --rc geninfo_all_blocks=1 00:26:27.954 --rc geninfo_unexecuted_blocks=1 00:26:27.954 00:26:27.954 ' 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:27.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.954 --rc genhtml_branch_coverage=1 00:26:27.954 --rc genhtml_function_coverage=1 00:26:27.954 --rc genhtml_legend=1 00:26:27.954 --rc geninfo_all_blocks=1 00:26:27.954 --rc geninfo_unexecuted_blocks=1 00:26:27.954 00:26:27.954 ' 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.954 21:56:00 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.954 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.955 21:56:00 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.955 21:56:00 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:34.602 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:34.602 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:34.602 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:34.602 21:56:06 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:34.602 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:34.602 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # rdma_device_init 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@526 -- # allocate_nic_ips 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:34.603 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:34.861 21:56:06 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:34.861 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:34.861 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:34.861 altname enp217s0f0np0 00:26:34.861 altname ens818f0np0 00:26:34.861 inet 192.168.100.8/24 scope global mlx_0_0 00:26:34.861 valid_lft forever preferred_lft forever 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:34.861 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:34.861 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:34.861 altname enp217s0f1np1 00:26:34.861 altname ens818f1np1 00:26:34.861 inet 192.168.100.9/24 scope global mlx_0_1 00:26:34.861 valid_lft forever preferred_lft forever 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 
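The get_ip_address calls traced above resolve each RDMA interface to its IPv4 address by parsing "ip -o -4 addr show". A minimal standalone sketch of that lookup, assuming the same mlx_0_0/mlx_0_1 netdev names this rig reports (any IPv4-configured interface works):

#!/usr/bin/env bash
# Re-creation of the nvmf/common.sh get_ip_address helper seen in the
# trace: "ip -o -4" prints one record per address, field 4 is
# "ADDR/PREFIXLEN", and cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed
get_ip_address mlx_0_1   # prints 192.168.100.9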
00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:34.861 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:26:34.862 192.168.100.9' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@481 -- # echo '192.168.100.8 00:26:34.862 192.168.100.9' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # head -n 1 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:26:34.862 192.168.100.9' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # tail -n +2 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # head -n 1 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:26:34.862 21:56:06 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=3152045 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 3152045 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3152045 ']' 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:34.862 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:34.862 [2024-11-29 21:56:07.082993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:34.862 [2024-11-29 21:56:07.083042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.120 [2024-11-29 21:56:07.153496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.120 [2024-11-29 21:56:07.193756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:35.120 [2024-11-29 21:56:07.193799] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.120 [2024-11-29 21:56:07.193809] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.120 [2024-11-29 21:56:07.193818] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.120 [2024-11-29 21:56:07.193824] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.120 [2024-11-29 21:56:07.193872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.120 [2024-11-29 21:56:07.193955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.120 [2024-11-29 21:56:07.194044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.120 [2024-11-29 21:56:07.194046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.120 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:35.120 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:26:35.120 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:35.120 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:35.120 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:35.120 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.120 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:35.120 21:56:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:38.399 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:38.399 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:38.399 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:26:38.399 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:38.657 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:38.657 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:26:38.657 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:38.657 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:26:38.657 21:56:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:26:38.917 [2024-11-29 21:56:10.971779] rdma.c:2737:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:26:38.918 [2024-11-29 21:56:10.994127] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6f1140/0x5e2880) succeed. 00:26:38.918 [2024-11-29 21:56:11.004816] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6f3790/0x623f20) succeed. 
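At this point the target is up and the RDMA transport exists (the requested in-capsule size of 0 was rounded up to the 256-byte minimum, per the WARNING above); the records that follow create subsystem nqn.2016-06.io.spdk:cnode1, attach its namespaces, and expose it on 192.168.100.8:4420. A condensed sketch of that bring-up as standalone RPCs, slightly reordered from the trace, assuming an SPDK checkout at $SPDK_DIR (hypothetical variable; the log uses the full Jenkins workspace path) and an already-running nvmf_tgt:

#!/usr/bin/env bash
set -e
rpc="$SPDK_DIR/scripts/rpc.py"

# RDMA transport; 0 in-capsule bytes requested, target enforces its minimum
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0

# Subsystem with allow-any-host (-a) and a serial number (-s)
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# Back the subsystem with a 64 MiB, 512 B-block malloc bdev
# (bdev_malloc_create prints the new bdev name, e.g. Malloc0)
malloc=$("$rpc" bdev_malloc_create 64 512)
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$malloc"

# Listen for NVMe/RDMA hosts on the first target IP
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420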
00:26:38.918 21:56:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.175 21:56:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:39.175 21:56:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:39.434 21:56:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:39.434 21:56:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:39.692 21:56:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:39.692 [2024-11-29 21:56:11.914165] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:39.951 21:56:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:39.951 21:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:26:39.951 21:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:26:39.951 21:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:39.951 21:56:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:26:41.325 Initializing NVMe Controllers 00:26:41.325 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:26:41.325 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:26:41.325 Initialization complete. Launching workers. 00:26:41.325 ======================================================== 00:26:41.325 Latency(us) 00:26:41.325 Device Information : IOPS MiB/s Average min max 00:26:41.325 PCIE (0000:d8:00.0) NSID 1 from core 0: 101567.00 396.75 314.66 43.16 4279.25 00:26:41.325 ======================================================== 00:26:41.325 Total : 101567.00 396.75 314.66 43.16 4279.25 00:26:41.325 00:26:41.325 21:56:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:44.611 Initializing NVMe Controllers 00:26:44.611 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.611 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.611 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:44.611 Initialization complete. Launching workers. 
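[Note] The export steps just traced condense to the RPC calls below (again a sketch; the flag values are taken verbatim from the run):

$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # becomes NSID 2
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The local-PCIe baseline above (-r 'trtype:PCIe traddr:0000:d8:00.0') and the fabric runs use the same perf binary and differ only in the -r transport ID; the table that follows is the q=1 RDMA run just launched.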
00:26:44.611 ======================================================== 00:26:44.611 Latency(us) 00:26:44.611 Device Information : IOPS MiB/s Average min max 00:26:44.611 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6750.99 26.37 147.79 46.20 5027.86 00:26:44.611 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5243.35 20.48 190.35 68.25 5096.17 00:26:44.611 ======================================================== 00:26:44.611 Total : 11994.35 46.85 166.40 46.20 5096.17 00:26:44.611 00:26:44.611 21:56:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:47.898 Initializing NVMe Controllers 00:26:47.898 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.898 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:47.898 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:47.898 Initialization complete. Launching workers. 00:26:47.898 ======================================================== 00:26:47.898 Latency(us) 00:26:47.898 Device Information : IOPS MiB/s Average min max 00:26:47.898 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18616.29 72.72 1723.21 486.01 5537.04 00:26:47.898 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.85 15.75 7968.71 6413.44 8208.11 00:26:47.898 ======================================================== 00:26:47.898 Total : 22648.14 88.47 2835.04 486.01 8208.11 00:26:47.898 00:26:48.158 21:56:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:26:48.158 21:56:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:52.348 Initializing NVMe Controllers 00:26:52.349 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.349 Controller IO queue size 128, less than required. 00:26:52.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.349 Controller IO queue size 128, less than required. 00:26:52.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.349 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:52.349 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:52.349 Initialization complete. Launching workers. 
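[Note] Decoding the spdk_nvme_perf invocations: -q is the queue depth per namespace, -o the I/O size in bytes, -w randrw with -M 50 a 50/50 random read/write mix, -t the run time in seconds, and -r the transport ID of the listener created above. A representative call from this run, commented (a sketch; -O as the I/O unit size is our reading of the flag, everything else is verbatim from the trace):

# -q 128: queue depth per namespace; -o 262144: I/O size in bytes (256 KiB)
# -O 16384: I/O unit size in bytes; -w randrw -M 50: 50/50 random read/write
# -t 2: run for 2 seconds; -r: transport ID string for the RDMA listener
$SPDK_DIR/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

The "Controller IO queue size 128, less than required" notices are advisory: as the message itself says, requests beyond what the controller queue holds are queued at the NVMe driver rather than failed. The table that follows belongs to this run.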
00:26:52.349 ======================================================== 00:26:52.349 Latency(us) 00:26:52.349 Device Information : IOPS MiB/s Average min max 00:26:52.349 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4011.50 1002.88 32076.47 13794.18 86048.19 00:26:52.349 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4033.00 1008.25 31436.69 13920.07 77461.96 00:26:52.349 ======================================================== 00:26:52.349 Total : 8044.50 2011.12 31755.73 13794.18 86048.19 00:26:52.349 00:26:52.349 21:56:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:26:52.917 No valid NVMe controllers or AIO or URING devices found 00:26:52.917 Initializing NVMe Controllers 00:26:52.917 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.917 Controller IO queue size 128, less than required. 00:26:52.917 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.917 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:52.917 Controller IO queue size 128, less than required. 00:26:52.917 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.917 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:52.917 WARNING: Some requested NVMe devices were skipped 00:26:52.917 21:56:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:26:57.105 Initializing NVMe Controllers 00:26:57.105 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.105 Controller IO queue size 128, less than required. 00:26:57.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.105 Controller IO queue size 128, less than required. 00:26:57.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.105 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:57.105 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:57.105 Initialization complete. Launching workers. 
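[Note] The -o 36964 run above removed both namespaces because 36964 is not a multiple of the 512 B sector size: 36964 = 72 * 512 + 100, so perf drops each misaligned namespace from the test instead of issuing unaligned I/O, and with no namespaces left it reports "No valid NVMe controllers or AIO or URING devices found". A quick alignment check:

echo $(( 36964 % 512 ))    # prints 100; a valid -o for a 512 B namespace must print 0

The --transport-stat run just launched adds per-device RDMA counters (polls, completions, work requests, doorbell updates) to the usual latency table; those statistics follow.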
00:26:57.105 00:26:57.105 ==================== 00:26:57.105 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:57.105 RDMA transport: 00:26:57.105 dev name: mlx5_0 00:26:57.105 polls: 406493 00:26:57.105 idle_polls: 402746 00:26:57.105 completions: 45586 00:26:57.105 queued_requests: 1 00:26:57.105 total_send_wrs: 22793 00:26:57.105 send_doorbell_updates: 3513 00:26:57.105 total_recv_wrs: 22920 00:26:57.105 recv_doorbell_updates: 3515 00:26:57.105 --------------------------------- 00:26:57.105 00:26:57.105 ==================== 00:26:57.105 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:57.105 RDMA transport: 00:26:57.105 dev name: mlx5_0 00:26:57.105 polls: 410305 00:26:57.105 idle_polls: 410027 00:26:57.105 completions: 20310 00:26:57.105 queued_requests: 1 00:26:57.105 total_send_wrs: 10155 00:26:57.105 send_doorbell_updates: 254 00:26:57.105 total_recv_wrs: 10282 00:26:57.105 recv_doorbell_updates: 255 00:26:57.105 --------------------------------- 00:26:57.105 ======================================================== 00:26:57.105 Latency(us) 00:26:57.105 Device Information : IOPS MiB/s Average min max 00:26:57.105 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5688.68 1422.17 22495.39 10740.36 68684.55 00:26:57.105 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2534.35 633.59 50401.55 31636.09 74955.41 00:26:57.105 ======================================================== 00:26:57.105 Total : 8223.02 2055.76 31096.10 10740.36 74955.41 00:26:57.105 00:26:57.105 21:56:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:57.105 21:56:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.364 21:56:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:57.364 21:56:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:26:57.364 21:56:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:03.930 21:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b2e82004-5b84-4b39-bfc2-70274e064f14 00:27:03.930 21:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b2e82004-5b84-4b39-bfc2-70274e064f14 00:27:03.930 21:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=b2e82004-5b84-4b39-bfc2-70274e064f14 00:27:03.930 21:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:03.930 21:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:03.930 21:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:03.930 21:56:35 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:03.930 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:03.930 { 00:27:03.930 "uuid": "b2e82004-5b84-4b39-bfc2-70274e064f14", 00:27:03.930 "name": "lvs_0", 00:27:03.930 "base_bdev": "Nvme0n1", 00:27:03.930 "total_data_clusters": 476466, 00:27:03.930 "free_clusters": 476466, 00:27:03.930 "block_size": 512, 00:27:03.930 "cluster_size": 4194304 00:27:03.930 
} 00:27:03.930 ]' 00:27:03.930 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b2e82004-5b84-4b39-bfc2-70274e064f14") .free_clusters' 00:27:03.930 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=476466 00:27:03.930 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b2e82004-5b84-4b39-bfc2-70274e064f14") .cluster_size' 00:27:04.188 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:04.189 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1905864 00:27:04.189 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1905864 00:27:04.189 1905864 00:27:04.189 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:27:04.189 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:04.189 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b2e82004-5b84-4b39-bfc2-70274e064f14 lbd_0 20480 00:27:04.756 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=89db0140-f131-4e72-a14e-85bf5cfe0c07 00:27:04.756 21:56:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 89db0140-f131-4e72-a14e-85bf5cfe0c07 lvs_n_0 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=adaf7edb-8190-4642-91cd-9392f09745e5 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb adaf7edb-8190-4642-91cd-9392f09745e5 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=adaf7edb-8190-4642-91cd-9392f09745e5 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:06.133 { 00:27:06.133 "uuid": "b2e82004-5b84-4b39-bfc2-70274e064f14", 00:27:06.133 "name": "lvs_0", 00:27:06.133 "base_bdev": "Nvme0n1", 00:27:06.133 "total_data_clusters": 476466, 00:27:06.133 "free_clusters": 471346, 00:27:06.133 "block_size": 512, 00:27:06.133 "cluster_size": 4194304 00:27:06.133 }, 00:27:06.133 { 00:27:06.133 "uuid": "adaf7edb-8190-4642-91cd-9392f09745e5", 00:27:06.133 "name": "lvs_n_0", 00:27:06.133 "base_bdev": "89db0140-f131-4e72-a14e-85bf5cfe0c07", 00:27:06.133 "total_data_clusters": 5114, 00:27:06.133 "free_clusters": 5114, 00:27:06.133 "block_size": 512, 00:27:06.133 "cluster_size": 4194304 00:27:06.133 } 00:27:06.133 ]' 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="adaf7edb-8190-4642-91cd-9392f09745e5") .free_clusters' 00:27:06.133 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:06.134 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="adaf7edb-8190-4642-91cd-9392f09745e5") .cluster_size' 00:27:06.134 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:06.134 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:06.134 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:06.134 20456 00:27:06.134 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:06.134 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u adaf7edb-8190-4642-91cd-9392f09745e5 lbd_nest_0 20456 00:27:06.393 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=8d2c198a-a543-411e-b95a-612694f1b841 00:27:06.393 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.653 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:06.653 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 8d2c198a-a543-411e-b95a-612694f1b841 00:27:06.912 21:56:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:06.912 21:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:06.912 21:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:06.912 21:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:06.912 21:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:06.912 21:56:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:19.112 Initializing NVMe Controllers 00:27:19.112 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:19.112 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:19.112 Initialization complete. Launching workers. 
00:27:19.113 ======================================================== 00:27:19.113 Latency(us) 00:27:19.113 Device Information : IOPS MiB/s Average min max 00:27:19.113 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5858.80 2.86 170.24 68.22 7223.38 00:27:19.113 ======================================================== 00:27:19.113 Total : 5858.80 2.86 170.24 68.22 7223.38 00:27:19.113 00:27:19.113 21:56:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:19.113 21:56:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:31.318 Initializing NVMe Controllers 00:27:31.318 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:31.318 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:31.318 Initialization complete. Launching workers. 00:27:31.318 ======================================================== 00:27:31.318 Latency(us) 00:27:31.318 Device Information : IOPS MiB/s Average min max 00:27:31.318 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2656.71 332.09 375.40 156.19 8101.36 00:27:31.318 ======================================================== 00:27:31.318 Total : 2656.71 332.09 375.40 156.19 8101.36 00:27:31.318 00:27:31.318 21:57:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:31.318 21:57:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:31.318 21:57:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:41.341 Initializing NVMe Controllers 00:27:41.341 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.341 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:41.341 Initialization complete. Launching workers. 00:27:41.341 ======================================================== 00:27:41.341 Latency(us) 00:27:41.341 Device Information : IOPS MiB/s Average min max 00:27:41.341 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11421.90 5.58 2800.55 948.94 7975.18 00:27:41.341 ======================================================== 00:27:41.341 Total : 11421.90 5.58 2800.55 948.94 7975.18 00:27:41.341 00:27:41.341 21:57:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:41.341 21:57:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:53.555 Initializing NVMe Controllers 00:27:53.555 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.555 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:53.555 Initialization complete. Launching workers. 
00:27:53.555 ======================================================== 00:27:53.555 Latency(us) 00:27:53.555 Device Information : IOPS MiB/s Average min max 00:27:53.555 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4010.57 501.32 7978.28 5889.41 14902.38 00:27:53.555 ======================================================== 00:27:53.555 Total : 4010.57 501.32 7978.28 5889.41 14902.38 00:27:53.555 00:27:53.555 21:57:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:53.555 21:57:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:53.555 21:57:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:05.766 Initializing NVMe Controllers 00:28:05.766 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.766 Controller IO queue size 128, less than required. 00:28:05.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.766 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:05.766 Initialization complete. Launching workers. 00:28:05.766 ======================================================== 00:28:05.766 Latency(us) 00:28:05.766 Device Information : IOPS MiB/s Average min max 00:28:05.766 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18839.10 9.20 6796.39 1898.24 15776.00 00:28:05.766 ======================================================== 00:28:05.766 Total : 18839.10 9.20 6796.39 1898.24 15776.00 00:28:05.766 00:28:05.766 21:57:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:05.766 21:57:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:15.750 Initializing NVMe Controllers 00:28:15.750 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:15.750 Controller IO queue size 128, less than required. 00:28:15.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:15.750 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:15.750 Initialization complete. Launching workers. 
00:28:15.750 ======================================================== 00:28:15.750 Latency(us) 00:28:15.750 Device Information : IOPS MiB/s Average min max 00:28:15.750 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11109.52 1388.69 11520.74 3441.20 24326.98 00:28:15.750 ======================================================== 00:28:15.750 Total : 11109.52 1388.69 11520.74 3441.20 24326.98 00:28:15.750 00:28:15.750 21:57:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.750 21:57:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d2c198a-a543-411e-b95a-612694f1b841 00:28:16.009 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:16.269 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89db0140-f131-4e72-a14e-85bf5cfe0c07 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.529 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:16.529 rmmod nvme_rdma 00:28:16.529 rmmod nvme_fabrics 00:28:16.789 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.789 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:16.789 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:16.789 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 3152045 ']' 00:28:16.789 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 3152045 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3152045 ']' 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3152045 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3152045 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:16.790 21:57:48 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3152045' 00:28:16.790 killing process with pid 3152045 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3152045 00:28:16.790 21:57:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3152045 00:28:19.328 21:57:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:19.328 21:57:51 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:28:19.328 00:28:19.328 real 1m51.334s 00:28:19.328 user 6m59.996s 00:28:19.328 sys 0m7.332s 00:28:19.328 21:57:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:19.328 21:57:51 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:19.328 ************************************ 00:28:19.328 END TEST nvmf_perf 00:28:19.328 ************************************ 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.329 ************************************ 00:28:19.329 START TEST nvmf_fio_host 00:28:19.329 ************************************ 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:28:19.329 * Looking for test storage... 
00:28:19.329 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.329 --rc genhtml_branch_coverage=1 00:28:19.329 --rc genhtml_function_coverage=1 00:28:19.329 --rc genhtml_legend=1 00:28:19.329 --rc geninfo_all_blocks=1 00:28:19.329 --rc geninfo_unexecuted_blocks=1 00:28:19.329 00:28:19.329 ' 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.329 --rc genhtml_branch_coverage=1 00:28:19.329 --rc genhtml_function_coverage=1 00:28:19.329 --rc genhtml_legend=1 00:28:19.329 --rc geninfo_all_blocks=1 00:28:19.329 --rc geninfo_unexecuted_blocks=1 00:28:19.329 00:28:19.329 ' 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.329 --rc genhtml_branch_coverage=1 00:28:19.329 --rc genhtml_function_coverage=1 00:28:19.329 --rc genhtml_legend=1 00:28:19.329 --rc geninfo_all_blocks=1 00:28:19.329 --rc geninfo_unexecuted_blocks=1 00:28:19.329 00:28:19.329 ' 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.329 --rc genhtml_branch_coverage=1 00:28:19.329 --rc genhtml_function_coverage=1 00:28:19.329 --rc genhtml_legend=1 00:28:19.329 --rc geninfo_all_blocks=1 00:28:19.329 --rc geninfo_unexecuted_blocks=1 00:28:19.329 00:28:19.329 ' 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.329 21:57:51 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:19.329 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:19.330 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:28:19.330 
21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.330 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.589 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:19.589 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:19.589 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.589 21:57:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:26.164 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:26.165 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:26.165 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:28:26.165 21:57:57 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:26.165 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:26.165 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # rdma_device_init 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@526 -- # allocate_nic_ips 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:26.165 21:57:57 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:26.165 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:26.165 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:26.165 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:26.165 altname enp217s0f0np0 00:28:26.166 altname ens818f0np0 00:28:26.166 inet 192.168.100.8/24 scope global mlx_0_0 00:28:26.166 valid_lft forever preferred_lft forever 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # 
interface=mlx_0_1 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:26.166 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:26.166 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:26.166 altname enp217s0f1np1 00:28:26.166 altname ens818f1np1 00:28:26.166 inet 192.168.100.9/24 scope global mlx_0_1 00:28:26.166 valid_lft forever preferred_lft forever 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:26.166 21:57:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # 
for nic_name in $(get_rdma_if_list) 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:28:26.166 192.168.100.9' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:28:26.166 192.168.100.9' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # head -n 1 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:28:26.166 192.168.100.9' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # tail -n +2 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # head -n 1 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3173046 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 
-- # waitforlisten 3173046 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3173046 ']' 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:26.166 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.166 [2024-11-29 21:57:58.139392] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:26.166 [2024-11-29 21:57:58.139443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.166 [2024-11-29 21:57:58.208612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.166 [2024-11-29 21:57:58.248434] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.166 [2024-11-29 21:57:58.248478] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.166 [2024-11-29 21:57:58.248487] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.166 [2024-11-29 21:57:58.248495] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.166 [2024-11-29 21:57:58.248502] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.166 [2024-11-29 21:57:58.248556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.166 [2024-11-29 21:57:58.248652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.166 [2024-11-29 21:57:58.248736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.166 [2024-11-29 21:57:58.248738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.167 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:26.167 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:28:26.167 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:26.426 [2024-11-29 21:57:58.534375] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xec9f50/0xece400) succeed. 00:28:26.426 [2024-11-29 21:57:58.544963] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xecb540/0xf0faa0) succeed. 
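For reference, the address-lookup helper the trace above keeps cycling through (nvmf/common.sh@116-117) boils down to a three-stage pipeline; a minimal standalone sketch, using the interface names and addresses this run reported:

    # Print the first IPv4 address assigned to an interface.
    # `ip -o -4 addr show` emits one line per address; field 4 is
    # "ADDR/PREFIX" (e.g. 192.168.100.8/24) and cut strips the prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9

With both addresses collected into RDMA_IP_LIST, the target side is brought up as traced just above: nvmf_tgt is started, then the RDMA transport is created with `rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192`, which is what produces the two "Create IB device ... succeed" notices.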
00:28:26.686 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:26.686 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.686 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.686 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:26.686 Malloc1 00:28:26.946 21:57:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.946 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:27.205 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:27.464 [2024-11-29 21:57:59.534098] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:27.464 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:27.724 21:57:59 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:28:27.724 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:28:27.725 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:28:27.725 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:28:27.725 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:28:27.725 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:28:27.725 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:28:27.725 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:28:27.725 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:28:27.725 21:57:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:28:27.983 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:28:27.983 fio-3.35
00:28:27.983 Starting 1 thread
00:28:30.535
00:28:30.535 test: (groupid=0, jobs=1): err= 0: pid=3173481: Fri Nov 29 21:58:02 2024
00:28:30.535 read: IOPS=18.0k, BW=70.4MiB/s (73.9MB/s)(141MiB/2004msec)
00:28:30.535 slat (nsec): min=1354, max=32289, avg=1496.99, stdev=412.93
00:28:30.535 clat (usec): min=1805, max=6456, avg=3523.95, stdev=81.87
00:28:30.535 lat (usec): min=1823, max=6458, avg=3525.44, stdev=81.79
00:28:30.535 clat percentiles (usec):
00:28:30.535 | 1.00th=[ 3490], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 3523],
00:28:30.535 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523],
00:28:30.535 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3556], 95.00th=[ 3556],
00:28:30.535 | 99.00th=[ 3556], 99.50th=[ 3556], 99.90th=[ 4293], 99.95th=[ 5932],
00:28:30.535 | 99.99th=[ 6456]
00:28:30.535 bw ( KiB/s): min=70760, max=73032, per=100.00%, avg=72144.00, stdev=974.45, samples=4
00:28:30.535 iops : min=17690, max=18258, avg=18036.00, stdev=243.61, samples=4
00:28:30.535 write: IOPS=18.1k, BW=70.5MiB/s (74.0MB/s)(141MiB/2004msec); 0 zone resets
00:28:30.535 slat (nsec): min=1395, max=23325, avg=1574.21, stdev=401.99
00:28:30.535 clat (usec): min=1832, max=6431, avg=3522.21, stdev=75.70
00:28:30.535 lat (usec): min=1841, max=6433, avg=3523.78, stdev=75.62
00:28:30.535 clat percentiles (usec):
00:28:30.535 | 1.00th=[ 3490], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3523],
00:28:30.535 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523],
00:28:30.535 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3556], 95.00th=[ 3556],
00:28:30.535 | 99.00th=[ 3556], 99.50th=[ 3556], 99.90th=[ 4621], 99.95th=[ 5538],
00:28:30.535 | 99.99th=[ 5997]
00:28:30.535 bw ( KiB/s): min=70808, max=72872, per=100.00%, avg=72242.00, stdev=964.95, samples=4
00:28:30.535 iops : min=17702, max=18218, avg=18060.50, stdev=241.24, samples=4
00:28:30.535 lat (msec) : 2=0.01%, 4=99.85%, 10=0.14%
00:28:30.535 cpu : usr=99.50%, sys=0.15%, ctx=15, majf=0, minf=2
00:28:30.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:28:30.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:30.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:30.535 issued rwts: total=36136,36184,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:30.535 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:30.535
00:28:30.535 Run status group 0 (all jobs):
00:28:30.535 READ: bw=70.4MiB/s (73.9MB/s), 70.4MiB/s-70.4MiB/s (73.9MB/s-73.9MB/s), io=141MiB (148MB), run=2004-2004msec
00:28:30.535 WRITE: bw=70.5MiB/s (74.0MB/s), 70.5MiB/s-70.5MiB/s (74.0MB/s-74.0MB/s), io=141MiB (148MB), run=2004-2004msec
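That first pass completed error-free at roughly 18k read and 18k write IOPS of 4 KiB I/O, with mean completion latency around 3.5 ms at iodepth=128. Mechanically, the fio_nvme/fio_plugin wrapper traced before the job reduces to preloading the SPDK ioengine into a stock fio binary and packing the whole transport tuple into --filename; a minimal equivalent invocation, with paths and arguments exactly as logged:

    # Run stock fio against the NVMe-oF/RDMA target through SPDK's
    # fio plugin: LD_PRELOAD supplies ioengine=spdk, and --filename
    # carries the transport tuple instead of a block-device path.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
        --bs=4096

The ldd/grep/awk dance around each invocation only probes the plugin for linked ASan runtimes so they could be preloaded first on a sanitized build; here both probes came back empty.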
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 --
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:30.536 21:58:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:28:30.794 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:30.794 fio-3.35 00:28:30.794 Starting 1 thread 00:28:33.319 00:28:33.319 test: (groupid=0, jobs=1): err= 0: pid=3174134: Fri Nov 29 21:58:05 2024 00:28:33.319 read: IOPS=14.7k, BW=230MiB/s (241MB/s)(450MiB/1959msec) 00:28:33.319 slat (nsec): min=2261, max=41242, avg=2653.30, stdev=1311.94 00:28:33.319 clat (usec): min=483, max=9969, avg=1556.95, stdev=1262.95 00:28:33.319 lat (usec): min=486, max=9976, avg=1559.60, stdev=1263.55 00:28:33.319 clat percentiles (usec): 00:28:33.319 | 1.00th=[ 668], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 898], 00:28:33.319 | 30.00th=[ 971], 40.00th=[ 1045], 50.00th=[ 1156], 60.00th=[ 1254], 00:28:33.319 | 70.00th=[ 1385], 80.00th=[ 1565], 90.00th=[ 3523], 95.00th=[ 4817], 00:28:33.319 | 99.00th=[ 6587], 99.50th=[ 7111], 99.90th=[ 8717], 99.95th=[ 9241], 00:28:33.319 | 99.99th=[ 9896] 00:28:33.319 bw ( KiB/s): min=111009, max=119072, per=48.98%, avg=115224.25, stdev=3304.35, samples=4 00:28:33.319 iops : min= 6938, max= 7442, avg=7201.50, stdev=206.55, samples=4 00:28:33.319 write: IOPS=8310, BW=130MiB/s (136MB/s)(233MiB/1797msec); 0 zone resets 00:28:33.319 slat (usec): min=26, max=145, avg=28.81, stdev= 6.10 00:28:33.319 clat (usec): min=4252, max=21943, avg=12491.25, stdev=1899.02 00:28:33.319 lat (usec): min=4278, max=21972, avg=12520.06, stdev=1898.53 00:28:33.319 clat percentiles (usec): 00:28:33.319 | 1.00th=[ 7111], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:28:33.319 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911], 00:28:33.319 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14746], 95.00th=[15401], 00:28:33.319 | 99.00th=[17433], 99.50th=[18482], 99.90th=[20579], 99.95th=[21627], 00:28:33.319 | 99.99th=[21890] 00:28:33.319 bw ( KiB/s): min=112542, max=124960, per=89.81%, avg=119415.50, stdev=5225.11, samples=4 00:28:33.319 iops : min= 7033, max= 7810, avg=7463.25, stdev=326.95, samples=4 00:28:33.319 lat (usec) : 500=0.01%, 750=2.75%, 1000=20.04% 00:28:33.319 lat (msec) : 2=35.16%, 4=1.91%, 10=8.39%, 20=31.69%, 50=0.05% 00:28:33.319 cpu : usr=96.86%, sys=1.39%, ctx=237, majf=0, minf=2 00:28:33.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:33.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:33.319 issued rwts: total=28801,14934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:33.319 00:28:33.319 Run status group 0 (all jobs): 00:28:33.319 READ: bw=230MiB/s (241MB/s), 230MiB/s-230MiB/s (241MB/s-241MB/s), io=450MiB (472MB), run=1959-1959msec 00:28:33.319 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=233MiB (245MB), run=1797-1797msec 00:28:33.319 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 
']' 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:28:33.320 21:58:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:28:36.592 Nvme0n1 00:28:36.592 21:58:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:41.926 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=5f74f7cd-06e6-44c0-b25a-21803767c9cc 00:28:41.926 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 5f74f7cd-06e6-44c0-b25a-21803767c9cc 00:28:41.926 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=5f74f7cd-06e6-44c0-b25a-21803767c9cc 00:28:41.926 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:41.926 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:41.926 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:41.926 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:42.183 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:42.183 { 00:28:42.183 "uuid": "5f74f7cd-06e6-44c0-b25a-21803767c9cc", 00:28:42.183 "name": "lvs_0", 00:28:42.183 "base_bdev": "Nvme0n1", 00:28:42.183 "total_data_clusters": 1862, 00:28:42.183 "free_clusters": 1862, 00:28:42.183 "block_size": 512, 00:28:42.183 "cluster_size": 1073741824 00:28:42.183 } 00:28:42.183 ]' 00:28:42.183 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5f74f7cd-06e6-44c0-b25a-21803767c9cc") .free_clusters' 00:28:42.183 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1862 00:28:42.183 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5f74f7cd-06e6-44c0-b25a-21803767c9cc") .cluster_size' 00:28:42.183 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:28:42.183 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1906688 00:28:42.183 21:58:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1906688 00:28:42.183 1906688 00:28:42.183 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:28:42.861 33769e1f-6395-4a5a-aed8-4e9cc0397334 00:28:42.861 21:58:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:42.861 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:43.129 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:28:43.385 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- 
# grep libclang_rt.asan 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:43.386 21:58:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:43.644 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:43.644 fio-3.35 00:28:43.644 Starting 1 thread 00:28:46.169 00:28:46.169 test: (groupid=0, jobs=1): err= 0: pid=3176428: Fri Nov 29 21:58:18 2024 00:28:46.169 read: IOPS=9969, BW=38.9MiB/s (40.8MB/s)(78.1MiB/2005msec) 00:28:46.169 slat (nsec): min=1357, max=20940, avg=1457.76, stdev=275.61 00:28:46.169 clat (usec): min=169, max=332398, avg=6377.30, stdev=18580.50 00:28:46.169 lat (usec): min=170, max=332401, avg=6378.76, stdev=18580.52 00:28:46.169 clat percentiles (msec): 00:28:46.169 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:28:46.169 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:28:46.169 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:28:46.169 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:28:46.169 | 99.99th=[ 334] 00:28:46.169 bw ( KiB/s): min=15184, max=48240, per=99.89%, avg=39832.00, stdev=16433.25, samples=4 00:28:46.169 iops : min= 3796, max=12060, avg=9958.00, stdev=4108.31, samples=4 00:28:46.169 write: IOPS=9981, BW=39.0MiB/s (40.9MB/s)(78.2MiB/2005msec); 0 zone resets 00:28:46.169 slat (nsec): min=1387, max=11987, avg=1524.01, stdev=180.35 00:28:46.169 clat (usec): min=144, max=332690, avg=6341.56, stdev=18059.69 00:28:46.169 lat (usec): min=145, max=332693, avg=6343.09, stdev=18059.73 00:28:46.169 clat percentiles (msec): 00:28:46.169 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:28:46.169 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:28:46.169 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:28:46.169 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:28:46.169 | 99.99th=[ 334] 00:28:46.169 bw ( KiB/s): min=15840, max=47992, per=99.97%, avg=39914.00, stdev=16049.46, samples=4 00:28:46.169 iops : min= 3960, max=11998, avg=9978.50, stdev=4012.37, samples=4 00:28:46.169 lat (usec) : 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.02% 00:28:46.169 lat (msec) : 2=0.04%, 4=0.28%, 10=99.31%, 500=0.32% 00:28:46.169 cpu : usr=99.60%, sys=0.05%, ctx=16, majf=0, minf=2 00:28:46.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:46.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:46.169 issued rwts: total=19988,20012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:46.169 00:28:46.169 Run status group 0 (all jobs): 00:28:46.169 READ: bw=38.9MiB/s (40.8MB/s), 38.9MiB/s-38.9MiB/s (40.8MB/s-40.8MB/s), io=78.1MiB (81.9MB), run=2005-2005msec 00:28:46.169 WRITE: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=78.2MiB (82.0MB), run=2005-2005msec 
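Unlike the first two passes, which ran against the Malloc1 ramdisk, this one exercised a logical volume carved from the physical NVMe device at 0000:d8:00.0. Strung together, the RPCs traced since the Nvme0 attach form the whole provisioning recipe; a condensed sketch with flags, names, and sizes exactly as logged (only the $rpc shorthand for the full rpc.py path is introduced here):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Claim the physical NVMe device and put a lvol store on it.
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8
    $rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
    # 1862 free clusters x 1 GiB (1073741824 B) each = 1862 * 1024 MiB
    # = 1906688 MiB, the figure get_lvs_free_mb derived via jq above.
    $rpc bdev_lvol_create -l lvs_0 lbd_0 1906688
    # Export the volume over NVMe-oF/RDMA as cnode2 for the fio pass.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420

The nested pass that follows repeats the pattern one level up: a second store (lvs_n_0, 4 MiB clusters, --clear-method none) is created on lbd_0 itself, and its 476206 free clusters yield the 1904824 MiB lbd_nest_0 volume exported as cnode3.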
00:28:46.169 21:58:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:46.426 21:58:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=031eeb2d-f45a-4f63-8cc9-76014042872d 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 031eeb2d-f45a-4f63-8cc9-76014042872d 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=031eeb2d-f45a-4f63-8cc9-76014042872d 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:47.797 { 00:28:47.797 "uuid": "5f74f7cd-06e6-44c0-b25a-21803767c9cc", 00:28:47.797 "name": "lvs_0", 00:28:47.797 "base_bdev": "Nvme0n1", 00:28:47.797 "total_data_clusters": 1862, 00:28:47.797 "free_clusters": 0, 00:28:47.797 "block_size": 512, 00:28:47.797 "cluster_size": 1073741824 00:28:47.797 }, 00:28:47.797 { 00:28:47.797 "uuid": "031eeb2d-f45a-4f63-8cc9-76014042872d", 00:28:47.797 "name": "lvs_n_0", 00:28:47.797 "base_bdev": "33769e1f-6395-4a5a-aed8-4e9cc0397334", 00:28:47.797 "total_data_clusters": 476206, 00:28:47.797 "free_clusters": 476206, 00:28:47.797 "block_size": 512, 00:28:47.797 "cluster_size": 4194304 00:28:47.797 } 00:28:47.797 ]' 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="031eeb2d-f45a-4f63-8cc9-76014042872d") .free_clusters' 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=476206 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="031eeb2d-f45a-4f63-8cc9-76014042872d") .cluster_size' 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1904824 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1904824 00:28:47.797 1904824 00:28:47.797 21:58:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:28:48.728 d8c06e27-49c1-4863-b283-8d536257039b 00:28:48.728 21:58:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:48.985 21:58:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:48.985 21:58:21 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:49.242 21:58:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:28:49.499 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk, iodepth=128 00:28:49.499 fio-3.35 00:28:49.499 Starting 1 thread 00:28:52.023 00:28:52.023 test: (groupid=0, jobs=1): err= 0: pid=3177497: Fri Nov 29 21:58:24 2024 00:28:52.023 read: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(79.2MiB/2005msec) 00:28:52.023 slat (nsec): min=1366, max=17772, avg=1511.04, stdev=268.69 00:28:52.023 clat (usec): min=2610, max=10580, avg=6264.65, stdev=177.33 00:28:52.023 lat (usec): min=2613, max=10582, avg=6266.16, stdev=177.30 00:28:52.023 clat percentiles (usec): 00:28:52.023 | 1.00th=[ 6128], 5.00th=[ 6194], 10.00th=[ 6194], 20.00th=[ 6259], 00:28:52.023 | 30.00th=[ 6259], 40.00th=[ 6259], 50.00th=[ 6259], 60.00th=[ 6259], 00:28:52.023 | 70.00th=[ 6259], 80.00th=[ 6325], 90.00th=[ 6325], 95.00th=[ 6325], 00:28:52.023 | 99.00th=[ 6390], 99.50th=[ 6456], 99.90th=[ 8979], 99.95th=[10159], 00:28:52.023 | 99.99th=[10552] 00:28:52.023 bw ( KiB/s): min=39224, max=41112, per=99.98%, avg=40418.00, stdev=834.95, samples=4 00:28:52.023 iops : min= 9806, max=10278, avg=10104.50, stdev=208.74, samples=4 00:28:52.023 write: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(79.2MiB/2005msec); 0 zone resets 00:28:52.023 slat (nsec): min=1396, max=16968, avg=1585.17, stdev=219.41 00:28:52.023 clat (usec): min=2605, max=10587, avg=6287.22, stdev=182.78 00:28:52.023 lat (usec): min=2609, max=10588, avg=6288.80, stdev=182.75 00:28:52.023 clat percentiles (usec): 00:28:52.023 | 1.00th=[ 6194], 5.00th=[ 6259], 10.00th=[ 6259], 20.00th=[ 6259], 00:28:52.023 | 30.00th=[ 6259], 40.00th=[ 6259], 50.00th=[ 6259], 60.00th=[ 6325], 00:28:52.024 | 70.00th=[ 6325], 80.00th=[ 6325], 90.00th=[ 6325], 95.00th=[ 6325], 00:28:52.024 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 8979], 99.95th=[10421], 00:28:52.024 | 99.99th=[10552] 00:28:52.024 bw ( KiB/s): min=39672, max=40752, per=99.89%, avg=40428.00, stdev=512.10, samples=4 00:28:52.024 iops : min= 9918, max=10188, avg=10107.00, stdev=128.03, samples=4 00:28:52.024 lat (msec) : 4=0.01%, 10=99.91%, 20=0.08% 00:28:52.024 cpu : usr=99.65%, sys=0.00%, ctx=15, majf=0, minf=2 00:28:52.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:52.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:52.024 issued rwts: total=20264,20286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:52.024 00:28:52.024 Run status group 0 (all jobs): 00:28:52.024 READ: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=79.2MiB (83.0MB), run=2005-2005msec 00:28:52.024 WRITE: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=79.2MiB (83.1MB), run=2005-2005msec 00:28:52.024 21:58:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:52.280 21:58:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:28:52.280 21:58:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:00.375 21:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:00.375 21:58:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_delete lvs_0/lbd_0 00:29:05.636 21:58:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:05.636 21:58:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:08.918 rmmod nvme_rdma 00:29:08.918 rmmod nvme_fabrics 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 3173046 ']' 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 3173046 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3173046 ']' 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3173046 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3173046 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3173046' 00:29:08.918 killing process with pid 3173046 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3173046 00:29:08.918 21:58:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3173046 00:29:08.918 21:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:08.918 21:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:29:08.918 00:29:08.918 real 0m49.758s 00:29:08.918 user 3m36.934s 00:29:08.918 sys 0m7.544s 00:29:08.918 21:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:08.918 
21:58:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.918 ************************************ 00:29:08.918 END TEST nvmf_fio_host 00:29:08.918 ************************************ 00:29:08.918 21:58:41 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:29:08.918 21:58:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:08.918 21:58:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:08.918 21:58:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.178 ************************************ 00:29:09.178 START TEST nvmf_failover 00:29:09.178 ************************************ 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:29:09.178 * Looking for test storage... 00:29:09.178 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:09.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.178 --rc genhtml_branch_coverage=1 00:29:09.178 --rc genhtml_function_coverage=1 00:29:09.178 --rc genhtml_legend=1 00:29:09.178 --rc geninfo_all_blocks=1 00:29:09.178 --rc geninfo_unexecuted_blocks=1 00:29:09.178 00:29:09.178 ' 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:09.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.178 --rc genhtml_branch_coverage=1 00:29:09.178 --rc genhtml_function_coverage=1 00:29:09.178 --rc genhtml_legend=1 00:29:09.178 --rc geninfo_all_blocks=1 00:29:09.178 --rc geninfo_unexecuted_blocks=1 00:29:09.178 00:29:09.178 ' 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:09.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.178 --rc genhtml_branch_coverage=1 00:29:09.178 --rc genhtml_function_coverage=1 00:29:09.178 --rc genhtml_legend=1 00:29:09.178 --rc geninfo_all_blocks=1 00:29:09.178 --rc geninfo_unexecuted_blocks=1 00:29:09.178 00:29:09.178 ' 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:09.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.178 --rc genhtml_branch_coverage=1 00:29:09.178 --rc genhtml_function_coverage=1 00:29:09.178 --rc genhtml_legend=1 00:29:09.178 --rc geninfo_all_blocks=1 00:29:09.178 --rc geninfo_unexecuted_blocks=1 00:29:09.178 00:29:09.178 ' 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.178 21:58:41 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.178 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:09.179 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.179 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.438 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:09.438 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:09.438 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.438 21:58:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.998 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:15.999 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:15.999 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme 
connect -i 15' 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:15.999 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:15.999 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # rdma_device_init 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 
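[Editor's note: the trace below this point walks each mlx_0_* netdev to read its IPv4 address. A minimal standalone sketch of what nvmf/common.sh is doing here — module load as traced at nvmf/common.sh@66-72, address lookup as traced at @116-117; the script framing and echo output are illustrative, the pipeline itself is the harness's own:]

    #!/usr/bin/env bash
    # Load the RDMA/IB stack the failover test needs (nvmf/common.sh@66-72).
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

    # First IPv4 address on an interface (nvmf/common.sh@116-117):
    # "ip -o -4 addr show" emits one line per address; field 4 is "A.B.C.D/prefix".
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # In this run the two mlx5 ports resolve to 192.168.100.8 and 192.168.100.9.
    for nic in mlx_0_0 mlx_0_1; do
        echo "$nic: $(get_ip_address "$nic")"
    done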
00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@526 -- # allocate_nic_ips 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:15.999 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:15.999 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:15.999 altname enp217s0f0np0 00:29:15.999 altname ens818f0np0 00:29:15.999 inet 192.168.100.8/24 scope global mlx_0_0 00:29:15.999 valid_lft forever preferred_lft forever 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- 
# for nic_name in $(get_rdma_if_list) 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:15.999 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:15.999 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:15.999 altname enp217s0f1np1 00:29:15.999 altname ens818f1np1 00:29:15.999 inet 192.168.100.9/24 scope global mlx_0_1 00:29:15.999 valid_lft forever preferred_lft forever 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:15.999 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:16.000 21:58:47 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:29:16.000 192.168.100.9' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:29:16.000 192.168.100.9' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # head -n 1 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:29:16.000 192.168.100.9' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # tail -n +2 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # head -n 1 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=3183930 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 3183930 00:29:16.000 21:58:47 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3183930 ']' 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:16.000 21:58:47 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:16.000 [2024-11-29 21:58:47.879677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:16.000 [2024-11-29 21:58:47.879731] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.000 [2024-11-29 21:58:47.949740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:16.000 [2024-11-29 21:58:47.988433] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.000 [2024-11-29 21:58:47.988478] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.000 [2024-11-29 21:58:47.988487] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.000 [2024-11-29 21:58:47.988495] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.000 [2024-11-29 21:58:47.988502] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.000 [2024-11-29 21:58:47.988606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.000 [2024-11-29 21:58:47.988710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:16.000 [2024-11-29 21:58:47.988712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.000 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:16.000 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:16.000 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:16.000 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:16.000 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:16.000 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.000 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:16.258 [2024-11-29 21:58:48.317803] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1be4710/0x1be8bc0) succeed. 
00:29:16.258 [2024-11-29 21:58:48.328561] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1be5c60/0x1c2a260) succeed. 00:29:16.258 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:16.515 Malloc0 00:29:16.515 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:16.791 21:58:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:16.791 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:17.047 [2024-11-29 21:58:49.209220] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:17.047 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:17.304 [2024-11-29 21:58:49.417616] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:17.304 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:29:17.562 [2024-11-29 21:58:49.614351] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3184255 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3184255 /var/tmp/bdevperf.sock 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3184255 ']' 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:17.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
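[Editor's note: at this point the target side is fully provisioned. The setup can be replayed by hand with the same rpc.py calls traced above; this sketch assumes nvmf_tgt is already up on /var/tmp/spdk.sock, and the port loop is a compaction of the three separate add_listener calls in host/failover.sh@26-28:]

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Transport and backing bdev (host/failover.sh@22-23), flags exactly as traced:
    # an RDMA transport, plus a 64 MiB malloc bdev with 512-byte blocks.
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # Subsystem with the Malloc0 namespace (host/failover.sh@24-25).
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Three listeners on the first RDMA IP; the failover steps below remove and
    # re-add these to force the host to move between ports 4420, 4421 and 4422.
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s $port
    done

The host side then attaches through the bdevperf RPC socket with bdev_nvme_attach_controller, as the trace continues below.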
00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.562 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:17.820 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.820 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:17.820 21:58:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:18.077 NVMe0n1 00:29:18.077 21:58:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:18.335 00:29:18.335 21:58:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:18.335 21:58:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3184273 00:29:18.335 21:58:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:19.269 21:58:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:19.528 21:58:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:22.811 21:58:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:22.811 00:29:22.811 21:58:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:23.068 21:58:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:26.346 21:58:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:26.346 [2024-11-29 21:58:58.295156] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:26.346 21:58:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:27.280 21:58:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:29:27.280 21:58:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3184273 00:29:33.865 { 00:29:33.865 "results": [ 00:29:33.865 { 00:29:33.865 "job": "NVMe0n1", 00:29:33.865 "core_mask": "0x1", 00:29:33.865 "workload": "verify", 00:29:33.865 "status": "finished", 00:29:33.865 "verify_range": { 00:29:33.865 "start": 0, 00:29:33.865 "length": 16384 00:29:33.865 }, 00:29:33.865 "queue_depth": 128, 00:29:33.865 "io_size": 4096, 00:29:33.865 "runtime": 15.006047, 
00:29:33.865 "iops": 14513.415824967095, 00:29:33.865 "mibps": 56.69303056627771, 00:29:33.865 "io_failed": 4628, 00:29:33.865 "io_timeout": 0, 00:29:33.865 "avg_latency_us": 8614.259503113522, 00:29:33.865 "min_latency_us": 352.256, 00:29:33.865 "max_latency_us": 1046898.2784 00:29:33.865 } 00:29:33.865 ], 00:29:33.865 "core_count": 1 00:29:33.865 } 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3184255 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3184255 ']' 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3184255 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3184255 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3184255' 00:29:33.865 killing process with pid 3184255 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3184255 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3184255 00:29:33.865 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:33.865 [2024-11-29 21:58:49.693630] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:33.865 [2024-11-29 21:58:49.693705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184255 ] 00:29:33.865 [2024-11-29 21:58:49.765322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.865 [2024-11-29 21:58:49.804654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.865 Running I/O for 15 seconds... 
00:29:33.865 18432.00 IOPS, 72.00 MiB/s [2024-11-29T20:59:06.113Z] 10048.00 IOPS, 39.25 MiB/s [2024-11-29T20:59:06.113Z] [2024-11-29 21:58:52.626789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.626845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.626866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.626886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.626905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.626925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.626944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.626962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.626981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.626990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.627000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 
[2024-11-29 21:58:52.627009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.627019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.627033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.627044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.627053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.627063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.627072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.627082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.627091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.865 [2024-11-29 21:58:52.627101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.865 [2024-11-29 21:58:52.627110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30640 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.866 [2024-11-29 21:58:52.627379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627563] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0 00:29:33.866 [2024-11-29 21:58:52.627741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 
nsid:1 lba:29840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x180c00 00:29:33.866 [2024-11-29 21:58:52.627750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0
[... the same pattern repeats for every remaining queued READ on qid:1 (lba 29848 through 30472, len:8, key:0x180c00): each nvme_io_qpair_print_command notice is immediately followed by an identical ABORTED - SQ DELETION (00/08) completion (cid:58724 cdw0:21274000 sqhd:6f68 p:1 m:0 dnr:0) ...]
00:29:33.869 [2024-11-29 21:58:52.640682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:33.869 [2024-11-29 21:58:52.640695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:33.869 [2024-11-29 21:58:52.640703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30480 len:8 PRP1 0x0 PRP2 0x0
00:29:33.869 [2024-11-29 21:58:52.640713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:33.869 [2024-11-29 21:58:52.640752] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4900 was disconnected and freed. reset controller.
00:29:33.869 [2024-11-29 21:58:52.640763] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:29:33.869 [2024-11-29 21:58:52.640774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.869 [2024-11-29 21:58:52.640810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:33.869 [2024-11-29 21:58:52.640822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58724 cdw0:b83a00 sqhd:339e p:1 m:0 dnr:0
00:29:33.869 [2024-11-29 21:58:52.640832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:33.869 [2024-11-29 21:58:52.640840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58724 cdw0:b83a00 sqhd:339e p:1 m:0 dnr:0
00:29:33.869 [2024-11-29 21:58:52.640850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:33.869 [2024-11-29 21:58:52.640858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58724 cdw0:b83a00 sqhd:339e p:1 m:0 dnr:0
00:29:33.869 [2024-11-29 21:58:52.640867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:29:33.869 [2024-11-29 21:58:52.640876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58724 cdw0:b83a00 sqhd:339e p:1 m:0 dnr:0
00:29:33.869 [2024-11-29 21:58:52.659486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:29:33.869 [2024-11-29 21:58:52.659504] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:29:33.869 [2024-11-29 21:58:52.659513] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:33.869 [2024-11-29 21:58:52.662241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.869 [2024-11-29 21:58:52.702949] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:33.869 11763.33 IOPS, 45.95 MiB/s [2024-11-29T20:59:06.117Z] 13398.50 IOPS, 52.34 MiB/s [2024-11-29T20:59:06.117Z] 12715.60 IOPS, 49.67 MiB/s [2024-11-29T20:59:06.117Z]
[2024-11-29 21:58:56.101237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.869 [2024-11-29 21:58:56.101278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0
[... the same pattern repeats for the remaining interleaved queued WRITEs (lba 126360 through 126768, SGL DATA BLOCK OFFSET 0x0) and READs (lba 125832 through 126152, SGL KEYED, key:0x182600) on qid:1: each command notice is immediately followed by an identical ABORTED - SQ DELETION (00/08) completion (cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0) ...]
[2024-11-29 21:58:56.103111]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:33.872 [2024-11-29 21:58:56.103309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.872 [2024-11-29 21:58:56.103720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.103750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182600 00:29:33.872 [2024-11-29 21:58:56.103759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58726 cdw0:21274000 sqhd:63bc p:1 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.105541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.872 [2024-11-29 21:58:56.105555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.872 [2024-11-29 21:58:56.105563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126344 len:8 PRP1 0x0 PRP2 0x0 00:29:33.872 [2024-11-29 21:58:56.105573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.872 [2024-11-29 21:58:56.105620] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4840 was disconnected and freed. reset controller. 00:29:33.872 [2024-11-29 21:58:56.105631] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:29:33.872 [2024-11-29 21:58:56.105642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.872 [2024-11-29 21:58:56.108366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.872 [2024-11-29 21:58:56.123732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:33.872 [2024-11-29 21:58:56.171384] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:33.872 11735.00 IOPS, 45.84 MiB/s
[2024-11-29T20:59:06.120Z] 12695.57 IOPS, 49.59 MiB/s
[2024-11-29T20:59:06.120Z] 13422.75 IOPS, 52.43 MiB/s
[2024-11-29T20:59:06.121Z] 13882.11 IOPS, 54.23 MiB/s
[2024-11-29T20:59:06.121Z] [2024-11-29 21:59:00.502423 - 21:59:00.504908] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: per-cid command/completion pairs for all remaining outstanding sqid:1 I/O (WRITE lba:105960-106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ lba:105480-105904 len:8 SGL KEYED DATA BLOCK len:0x1000 key:0x180c00), each completed: ABORTED - SQ DELETION (00/08) qid:1 cid:58728 cdw0:21274000 sqhd:8424 p:1 m:0 dnr:0
00:29:33.876 [2024-11-29 21:59:00.504918] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x180c00 00:29:33.876 [2024-11-29 21:59:00.504927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58728 cdw0:21274000 sqhd:8424 p:1 m:0 dnr:0 00:29:33.876 [2024-11-29 21:59:00.504938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x180c00 00:29:33.876 [2024-11-29 21:59:00.504947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58728 cdw0:21274000 sqhd:8424 p:1 m:0 dnr:0 00:29:33.876 [2024-11-29 21:59:00.504957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x180c00 00:29:33.876 [2024-11-29 21:59:00.504968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58728 cdw0:21274000 sqhd:8424 p:1 m:0 dnr:0 00:29:33.876 [2024-11-29 21:59:00.504978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x180c00 00:29:33.876 [2024-11-29 21:59:00.504987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58728 cdw0:21274000 sqhd:8424 p:1 m:0 dnr:0 00:29:33.876 [2024-11-29 21:59:00.505000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x180c00 00:29:33.876 [2024-11-29 21:59:00.505009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58728 cdw0:21274000 sqhd:8424 p:1 m:0 dnr:0 00:29:33.876 [2024-11-29 21:59:00.506820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.876 [2024-11-29 21:59:00.506836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.876 [2024-11-29 21:59:00.506845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105952 len:8 PRP1 0x0 PRP2 0x0 00:29:33.876 [2024-11-29 21:59:00.506855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.876 [2024-11-29 21:59:00.506898] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4840 was disconnected and freed. reset controller. 00:29:33.876 [2024-11-29 21:59:00.506910] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:29:33.876 [2024-11-29 21:59:00.506921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.876 [2024-11-29 21:59:00.509663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.876 [2024-11-29 21:59:00.524589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:33.876 12493.90 IOPS, 48.80 MiB/s [2024-11-29T20:59:06.124Z] [2024-11-29 21:59:00.572522] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:33.876 13015.73 IOPS, 50.84 MiB/s [2024-11-29T20:59:06.124Z] 13481.00 IOPS, 52.66 MiB/s [2024-11-29T20:59:06.124Z] 13879.00 IOPS, 54.21 MiB/s [2024-11-29T20:59:06.124Z] 14216.43 IOPS, 55.53 MiB/s [2024-11-29T20:59:06.124Z] 14512.00 IOPS, 56.69 MiB/s
00:29:33.876 Latency(us)
00:29:33.876 [2024-11-29T20:59:06.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:33.876 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:33.876 Verification LBA range: start 0x0 length 0x4000
00:29:33.876 NVMe0n1 : 15.01 14513.42 56.69 308.41 0.00 8614.26 352.26 1046898.28
00:29:33.876 [2024-11-29T20:59:06.124Z] ===================================================================================================================
00:29:33.876 [2024-11-29T20:59:06.124Z] Total : 14513.42 56.69 308.41 0.00 8614.26 352.26 1046898.28
00:29:33.876 Received shutdown signal, test time was about 15.000000 seconds
00:29:33.876
00:29:33.876 Latency(us)
00:29:33.876 [2024-11-29T20:59:06.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:33.876 [2024-11-29T20:59:06.124Z] ===================================================================================================================
00:29:33.876 [2024-11-29T20:59:06.124Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3186921
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3186921 /var/tmp/bdevperf.sock
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3186921 ']'
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
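The trace above launches bdevperf with -z (wait for an RPC client before running I/O) and -r to place its RPC socket at /var/tmp/bdevperf.sock, then waitforlisten polls until that socket is ready. A minimal bash sketch of the launch-and-wait pattern, assuming the same socket path and a simplified poll loop (the real waitforlisten() in autotest_common.sh also retries RPC calls against the socket):

    sock=/var/tmp/bdevperf.sock
    ./build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    for i in $(seq 1 100); do                 # max_retries=100, as in the trace
        [ -S "$sock" ] && break               # stop once the UNIX socket exists
        kill -0 "$bdevperf_pid" || exit 1     # give up if bdevperf already died
        sleep 0.1                             # assumed poll interval
    done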
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:33.876 21:59:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:34.147 21:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:34.147 21:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:29:34.147 21:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:29:34.147 [2024-11-29 21:59:06.277944] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:29:34.147 21:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:29:34.420 [2024-11-29 21:59:06.466635] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:29:34.420 21:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:34.678 NVMe0n1
00:29:34.678 21:59:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:34.936
00:29:34.936 21:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:35.194
00:29:35.194 21:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:35.194 21:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:29:35.452 21:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:35.710 21:59:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:29:38.988 21:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:38.988 21:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:29:38.988 21:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3187736
00:29:38.988 21:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:38.988 21:59:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3187736
00:29:39.922 {
00:29:39.922 "results": [
00:29:39.922 {
00:29:39.922 "job": "NVMe0n1",
00:29:39.922 "core_mask": "0x1",
00:29:39.922 "workload": "verify",
00:29:39.922 "status": "finished",
00:29:39.922 "verify_range": {
00:29:39.922 "start": 0,
00:29:39.922 "length": 16384
00:29:39.922 },
00:29:39.922 "queue_depth": 128,
00:29:39.922 "io_size": 4096,
00:29:39.922 "runtime": 1.010813,
00:29:39.922 "iops": 18234.826817621062,
00:29:39.922 "mibps": 71.22979225633227,
00:29:39.922 "io_failed": 0,
00:29:39.922 "io_timeout": 0,
00:29:39.922 "avg_latency_us": 6981.985066666667,
00:29:39.922 "min_latency_us": 2700.0832,
00:29:39.922 "max_latency_us": 12058.624
00:29:39.922 }
00:29:39.922 ],
00:29:39.922 "core_count": 1
00:29:39.922 }
00:29:39.922 21:59:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:39.922 [2024-11-29 21:59:05.909114] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:29:39.922 [2024-11-29 21:59:05.909175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186921 ]
00:29:39.922 [2024-11-29 21:59:05.980768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:39.922 [2024-11-29 21:59:06.016128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:39.922 [2024-11-29 21:59:07.679788] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:29:39.922 [2024-11-29 21:59:07.680398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.922 [2024-11-29 21:59:07.680429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.922 [2024-11-29 21:59:07.700915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:29:39.922 [2024-11-29 21:59:07.717276] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:39.922 Running I/O for 1 seconds...
00:29:39.922 18210.00 IOPS, 71.13 MiB/s
00:29:39.922 Latency(us)
00:29:39.922 [2024-11-29T20:59:12.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:39.922 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:39.922 Verification LBA range: start 0x0 length 0x4000
00:29:39.922 NVMe0n1 : 1.01 18234.83 71.23 0.00 0.00 6981.99 2700.08 12058.62
00:29:39.922 [2024-11-29T20:59:12.170Z] ===================================================================================================================
00:29:39.922 [2024-11-29T20:59:12.170Z] Total : 18234.83 71.23 0.00 0.00 6981.99 2700.08 12058.62
00:29:39.922 21:59:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
21:59:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:29:40.180 21:59:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:40.439 21:59:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:40.439 21:59:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:29:40.439 21:59:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:40.696 21:59:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:29:43.971 21:59:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
21:59:15 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:29:43.971 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3186921
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3186921 ']'
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3186921
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3186921
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3186921'
killing process with pid 3186921
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3186921
21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3186921
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:29:44.228 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 3183930 ']'
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 3183930
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3183930 ']'
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3183930
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3183930
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3183930'
killing process with pid 3183930
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3183930
00:29:44.487 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3183930
00:29:44.746 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:29:44.746 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:29:44.746
00:29:44.746 real 0m35.654s
00:29:44.746 user 1m58.621s
00:29:44.746 sys 0m7.209s
00:29:44.746 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:44.746 21:59:16 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
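nvmftestfini's teardown, traced above, disables errexit and unloads the NVMe fabrics kernel modules before killing the target process. A hedged bash sketch of that unload pattern (the retry bound of 20 comes from the trace; the back-off sleep is an assumption, since the full loop body is not shown here):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed back-off; modules can stay busy briefly after tests
    done
    set -e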
00:29:44.746 ************************************ 00:29:44.746 END TEST nvmf_failover 00:29:44.746 ************************************ 00:29:44.746 21:59:16 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:29:44.746 21:59:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:44.746 21:59:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:44.746 21:59:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.746 ************************************ 00:29:44.746 START TEST nvmf_host_discovery 00:29:44.746 ************************************ 00:29:44.746 21:59:16 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:29:45.005 * Looking for test storage... 00:29:45.005 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.005 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:45.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.005 --rc genhtml_branch_coverage=1 00:29:45.005 --rc genhtml_function_coverage=1 00:29:45.005 --rc genhtml_legend=1 00:29:45.006 --rc geninfo_all_blocks=1 00:29:45.006 --rc geninfo_unexecuted_blocks=1 00:29:45.006 00:29:45.006 ' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:45.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.006 --rc genhtml_branch_coverage=1 00:29:45.006 --rc genhtml_function_coverage=1 00:29:45.006 --rc genhtml_legend=1 00:29:45.006 --rc geninfo_all_blocks=1 00:29:45.006 --rc geninfo_unexecuted_blocks=1 00:29:45.006 00:29:45.006 ' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:45.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.006 --rc genhtml_branch_coverage=1 00:29:45.006 --rc genhtml_function_coverage=1 00:29:45.006 --rc genhtml_legend=1 00:29:45.006 --rc geninfo_all_blocks=1 00:29:45.006 --rc geninfo_unexecuted_blocks=1 00:29:45.006 00:29:45.006 ' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:45.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.006 --rc genhtml_branch_coverage=1 00:29:45.006 --rc genhtml_function_coverage=1 00:29:45.006 --rc genhtml_legend=1 00:29:45.006 --rc geninfo_all_blocks=1 00:29:45.006 --rc geninfo_unexecuted_blocks=1 00:29:45.006 00:29:45.006 ' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:45.006 21:59:17 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:45.006 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:29:45.006 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:29:45.006 00:29:45.006 real 0m0.230s 00:29:45.006 user 0m0.126s 00:29:45.006 sys 0m0.122s 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.006 ************************************ 00:29:45.006 END TEST nvmf_host_discovery 00:29:45.006 ************************************ 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.006 ************************************ 00:29:45.006 START TEST nvmf_host_multipath_status 00:29:45.006 ************************************ 00:29:45.006 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:29:45.266 * Looking for test storage... 00:29:45.266 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:45.266 21:59:17 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:45.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.266 --rc genhtml_branch_coverage=1 00:29:45.266 --rc genhtml_function_coverage=1 00:29:45.266 --rc genhtml_legend=1 00:29:45.266 --rc geninfo_all_blocks=1 00:29:45.266 --rc geninfo_unexecuted_blocks=1 00:29:45.266 00:29:45.266 ' 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:45.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.266 --rc genhtml_branch_coverage=1 00:29:45.266 --rc genhtml_function_coverage=1 00:29:45.266 --rc genhtml_legend=1 00:29:45.266 --rc geninfo_all_blocks=1 00:29:45.266 --rc geninfo_unexecuted_blocks=1 00:29:45.266 00:29:45.266 ' 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:45.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.266 --rc genhtml_branch_coverage=1 00:29:45.266 --rc genhtml_function_coverage=1 00:29:45.266 --rc genhtml_legend=1 00:29:45.266 --rc geninfo_all_blocks=1 00:29:45.266 --rc geninfo_unexecuted_blocks=1 00:29:45.266 00:29:45.266 ' 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:45.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.266 --rc genhtml_branch_coverage=1 00:29:45.266 --rc genhtml_function_coverage=1 
00:29:45.266 --rc genhtml_legend=1 00:29:45.266 --rc geninfo_all_blocks=1 00:29:45.266 --rc geninfo_unexecuted_blocks=1 00:29:45.266 00:29:45.266 ' 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.266 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:29:45.267 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.267 21:59:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.823 21:59:23 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:51.823 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 
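[editor's sketch] The trace above is the harness's PCI discovery: it seeds arrays of Intel (e810, x722) and Mellanox device IDs, then, since the run is RDMA on mlx5, collapses pci_devs to the Mellanox list before walking /sys/bus/pci/devices/$pci/net/ for the matching netdevs. A rough standalone approximation follows; it is not the test's own common.sh code (which uses a prebuilt pci_bus_cache table), and it substitutes lspci plus a hand-picked subset of the device IDs probed above:

    # Hedged sketch: approximate the pci_devs/pci_net_devs discovery seen above.
    # 0x15b3 is the Mellanox vendor ID; 0x1015 (ConnectX-4 LX) is the device ID
    # this run actually found at 0000:d9:00.0 and 0000:d9:00.1.
    mellanox=15b3
    for dev in 1015 1017 1019; do          # subset of the IDs probed in the trace
        lspci -Dnn -d "${mellanox}:${dev}" | while read -r pci _; do
            echo "Found ${pci} (0x${mellanox} - 0x${dev})"
            ls "/sys/bus/pci/devices/${pci}/net" 2>/dev/null   # netdevs under this NIC
        done
    done

On this machine the loop would print the two ConnectX-4 LX functions and the mlx_0_0 / mlx_0_1 interfaces that the rest of the trace configures.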
00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:51.824 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:51.824 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:51.824 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ rdma == tcp 
]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:51.824 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # rdma_device_init 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:51.824 21:59:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@526 -- # allocate_nic_ips 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:51.824 21:59:24 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:51.824 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:51.824 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:51.824 altname enp217s0f0np0 00:29:51.824 altname ens818f0np0 00:29:51.824 inet 192.168.100.8/24 scope global mlx_0_0 00:29:51.824 valid_lft forever preferred_lft forever 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:51.824 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:52.084 21:59:24 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:52.084 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:52.084 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:52.084 altname enp217s0f1np1 00:29:52.084 altname ens818f1np1 00:29:52.084 inet 192.168.100.9/24 scope global mlx_0_1 00:29:52.084 valid_lft forever preferred_lft forever 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:29:52.084 192.168.100.9' 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # head -n 1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:29:52.084 192.168.100.9' 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:29:52.084 192.168.100.9' 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # tail -n +2 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # head -n 1 00:29:52.084 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=3192031 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # 
waitforlisten 3192031 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3192031 ']' 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:52.085 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:52.085 [2024-11-29 21:59:24.239112] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:52.085 [2024-11-29 21:59:24.239163] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.085 [2024-11-29 21:59:24.309163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:52.345 [2024-11-29 21:59:24.348252] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.345 [2024-11-29 21:59:24.348285] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.345 [2024-11-29 21:59:24.348295] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.345 [2024-11-29 21:59:24.348304] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.345 [2024-11-29 21:59:24.348310] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
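[editor's sketch] At this point the harness has launched nvmf_tgt and is blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that start-then-poll pattern is below; rpc.py and the spdk_get_version RPC are real SPDK pieces, but the retry budget and structure here are illustrative rather than the harness's actual waitforlisten implementation:

    # Hedged sketch of the "waitforlisten" pattern traced above: launch the
    # target with the same flags as this run, then poll /var/tmp/spdk.sock
    # until an RPC succeeds.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
               -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break                      # target is up and serving RPCs
        fi
        sleep 0.1
    done

Once the socket answers, the test proceeds to create the rdma transport, the Malloc0 bdev, the cnode1 subsystem, and the 4420/4421 listeners, exactly as the next entries show.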
00:29:52.345 [2024-11-29 21:59:24.348356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.345 [2024-11-29 21:59:24.348358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.345 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:52.345 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:29:52.345 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:52.345 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.345 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:52.345 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.345 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3192031 00:29:52.345 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:52.604 [2024-11-29 21:59:24.668503] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2306910/0x230adc0) succeed. 00:29:52.604 [2024-11-29 21:59:24.678495] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2307dc0/0x234c460) succeed. 00:29:52.604 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:52.863 Malloc0 00:29:52.863 21:59:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:53.121 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.121 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:53.378 [2024-11-29 21:59:25.524619] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:53.378 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:29:53.637 [2024-11-29 21:59:25.716965] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3192318 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3192318 /var/tmp/bdevperf.sock 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3192318 ']' 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:53.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.637 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:53.896 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.896 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:29:53.896 21:59:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:54.154 21:59:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:29:54.412 Nvme0n1 00:29:54.412 21:59:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:54.671 Nvme0n1 00:29:54.671 21:59:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:29:54.671 21:59:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:56.574 21:59:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:29:56.574 21:59:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:29:56.833 21:59:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:29:57.091 21:59:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:29:58.027 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:29:58.027 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:58.027 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.027 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:58.286 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.286 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:58.286 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.286 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:58.286 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:58.286 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:58.286 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.286 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:58.545 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.545 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:58.545 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.545 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:58.803 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.803 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:58.803 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.803 21:59:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:58.803 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.803 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
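[editor's sketch] Every check_status round above is built from port_status calls: ask bdevperf's RPC server for its I/O paths, then filter one listener port's current/connected/accessible flag with jq. A condensed equivalent of that helper is below; the jq expression is taken verbatim from the trace, while the function wrapper itself just mirrors what multipath_status.sh is doing:

    # Hedged sketch of the port_status helper traced above. The jq filter is
    # the one used in this run; only the wrapper shape is paraphrased.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }
    # e.g. after set_ANA_state optimized optimized, as in the round above:
    port_status 4420 current true && echo "4420 is the active path"

The remaining rounds in the trace repeat this check after each ANA transition (non_optimized, inaccessible, and back to optimized), and again with the active_active multipath policy, so both 4420 and 4421 can report current=true at once.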
00:29:58.803 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.803 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:59.063 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:59.063 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:29:59.063 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:29:59.321 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:29:59.580 21:59:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:00.516 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:00.516 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:00.516 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.516 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:00.775 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:00.775 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:00.775 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.775 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:00.775 21:59:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.775 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:00.775 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.775 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:01.033 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:01.033 21:59:33 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:01.033 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:01.033 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:01.292 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:01.292 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:01.292 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:01.292 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:01.550 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:01.550 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:01.550 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:01.550 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:01.550 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:01.550 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:01.550 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:01.808 21:59:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:30:02.067 21:59:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:03.009 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:03.009 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:03.009 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.009 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:03.266 21:59:35 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:03.266 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:03.266 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.266 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:03.523 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:03.524 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:03.524 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.524 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:03.524 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:03.524 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:03.524 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.524 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:03.781 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:03.781 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:03.781 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:03.781 21:59:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:04.039 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:04.039 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:04.039 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:04.039 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:04.297 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:04.297 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:30:04.297 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:04.297 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:04.555 21:59:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:05.490 21:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:05.490 21:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:05.490 21:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.490 21:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:05.748 21:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.748 21:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:05.748 21:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.748 21:59:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:06.007 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:06.007 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:06.007 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.007 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:06.265 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:06.265 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:06.265 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.265 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:06.265 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:06.265 21:59:38 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:06.265 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.265 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:06.524 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:06.524 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:06.524 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.524 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:06.783 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:06.783 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:06.783 21:59:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:30:07.042 21:59:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:07.042 21:59:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:08.419 21:59:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.419 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:08.678 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.678 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:08.678 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.678 21:59:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:08.981 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.981 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:08.981 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.981 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:09.277 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:09.277 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:09.277 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:09.277 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.277 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:09.278 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:09.278 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:30:09.559 21:59:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:09.833 21:59:41 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:10.767 21:59:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:10.767 21:59:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:10.767 21:59:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:10.767 21:59:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:11.025 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:11.025 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:11.025 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.025 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:11.025 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.025 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:11.025 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.025 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:11.282 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.282 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:11.283 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.283 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:11.540 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.540 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:11.540 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.540 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:11.540 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:30:11.540 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:11.540 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:11.540 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:11.798 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:11.798 21:59:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:12.055 21:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:12.055 21:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:30:12.314 21:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:12.314 21:59:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.684 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:13.942 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.942 21:59:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:13.942 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.942 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:13.942 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.942 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:14.199 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.199 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:14.200 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.200 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:14.457 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.457 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:14.457 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.457 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:14.457 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.457 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:14.457 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:14.715 21:59:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:30:14.973 21:59:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:15.906 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:15.906 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:15.906 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.906 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:16.164 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:16.164 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:16.164 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.164 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:16.421 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.421 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:16.421 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.421 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:16.679 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.679 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:16.679 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.679 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:16.679 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.679 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:16.679 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.679 21:59:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:16.937 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.937 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:16.937 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.938 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:17.196 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.196 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:17.196 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:17.454 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:30:17.454 21:59:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:18.825 21:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:18.825 21:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:18.825 21:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.825 21:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:18.825 21:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.825 21:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:18.825 21:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.825 21:59:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:18.825 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.825 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:18.825 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.825 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:19.083 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.083 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:19.083 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:30:19.083 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:19.340 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.340 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:19.340 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.340 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:19.598 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.598 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:19.598 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.598 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:19.598 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.598 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:19.598 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:30:19.855 21:59:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:30:20.112 21:59:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:21.044 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:21.044 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:21.044 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.044 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:21.302 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.302 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:21.302 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.302 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:21.559 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.559 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:21.559 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.559 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:21.559 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.559 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:21.559 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.559 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:21.816 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.816 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:21.816 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.816 21:59:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:22.073 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.073 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:22.073 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.073 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3192318 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3192318 ']' 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3192318 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@955 -- # uname 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3192318 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3192318' 00:30:22.330 killing process with pid 3192318 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3192318 00:30:22.330 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3192318 00:30:22.330 { 00:30:22.330 "results": [ 00:30:22.330 { 00:30:22.330 "job": "Nvme0n1", 00:30:22.330 "core_mask": "0x4", 00:30:22.330 "workload": "verify", 00:30:22.330 "status": "terminated", 00:30:22.330 "verify_range": { 00:30:22.330 "start": 0, 00:30:22.330 "length": 16384 00:30:22.330 }, 00:30:22.330 "queue_depth": 128, 00:30:22.330 "io_size": 4096, 00:30:22.330 "runtime": 27.569003, 00:30:22.330 "iops": 16180.708457248164, 00:30:22.330 "mibps": 63.20589241112564, 00:30:22.330 "io_failed": 0, 00:30:22.330 "io_timeout": 0, 00:30:22.330 "avg_latency_us": 7891.257783096533, 00:30:22.330 "min_latency_us": 825.7536, 00:30:22.330 "max_latency_us": 3033320.6528 00:30:22.330 } 00:30:22.330 ], 00:30:22.330 "core_count": 1 00:30:22.330 } 00:30:22.593 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3192318 00:30:22.593 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:22.593 [2024-11-29 21:59:25.778173] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:30:22.593 [2024-11-29 21:59:25.778235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192318 ] 00:30:22.593 [2024-11-29 21:59:25.844889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.593 [2024-11-29 21:59:25.883640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.593 [2024-11-29 21:59:26.658112] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:30:22.593 Running I/O for 90 seconds... 
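The xtrace above repeatedly drives three small helpers from test/nvmf/host/multipath_status.sh: port_status compares one jq-extracted boolean from bdev_nvme_get_io_paths against an expected value, check_status fans out to six such checks, and set_ANA_state flips the ANA state of both RDMA listeners before each re-check. Below is a minimal reconstruction assembled from the commands that appear verbatim in the trace; the exact function bodies in the SPDK tree may differ, and it assumes a single matching io_path per trsvcid.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# port_status <trsvcid> <field> <expected> -- field is current/connected/accessible
port_status() {
	local port=$1 field=$2 expected=$3
	# Ask bdevperf for its view of the I/O paths and extract one boolean
	# for the listener on the given port (as done at @64 in the trace).
	[[ $($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
		jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field") == "$expected" ]]
}

# check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
# (the six-flag order matches the @68-@73 calls in the trace)
check_status() {
	port_status 4420 current "$1"
	port_status 4421 current "$2"
	port_status 4420 connected "$3"
	port_status 4421 connected "$4"
	port_status 4420 accessible "$5"
	port_status 4421 accessible "$6"
}

# set_ANA_state <state-for-4420> <state-for-4421>, e.g. non_optimized inaccessible
set_ANA_state() {
	$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t rdma -a 192.168.100.8 -s 4420 -n "$1"
	$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

The one-second sleep between each set_ANA_state and check_status presumably gives the initiator time to process the ANA change notification, and the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call at @116 is what makes both paths report current=true in the subsequent checks.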
00:30:22.593 18591.00 IOPS, 72.62 MiB/s [2024-11-29T20:59:54.841Z] 18752.00 IOPS, 73.25 MiB/s [2024-11-29T20:59:54.841Z] 18816.00 IOPS, 73.50 MiB/s [2024-11-29T20:59:54.841Z] 18806.25 IOPS, 73.46 MiB/s [2024-11-29T20:59:54.841Z] 18808.20 IOPS, 73.47 MiB/s [2024-11-29T20:59:54.841Z] 18854.50 IOPS, 73.65 MiB/s [2024-11-29T20:59:54.841Z] 18875.57 IOPS, 73.73 MiB/s [2024-11-29T20:59:54.841Z] 18883.12 IOPS, 73.76 MiB/s [2024-11-29T20:59:54.841Z] 18876.56 IOPS, 73.74 MiB/s [2024-11-29T20:59:54.841Z] 18857.90 IOPS, 73.66 MiB/s [2024-11-29T20:59:54.841Z] 18861.45 IOPS, 73.68 MiB/s [2024-11-29T20:59:54.841Z] 18853.67 IOPS, 73.65 MiB/s [2024-11-29T20:59:54.841Z] [2024-11-29 21:59:39.045117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9752 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000757e000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.593 [2024-11-29 21:59:39.045361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.593 [2024-11-29 21:59:39.045382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:22.593 [2024-11-29 21:59:39.045557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x182700 00:30:22.593 [2024-11-29 21:59:39.045567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.045982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.045990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 
21:59:39.046105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:22.594 [2024-11-29 21:59:39.046310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182700 00:30:22.594 [2024-11-29 21:59:39.046319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:10248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10320 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000756a000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.046978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.046987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.047000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.047009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.047020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.047029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.047040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182700 
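From here to the end of the dump, the replayed bdevperf log (the try.txt cat'ed above) is dominated by per-I/O completions carrying ASYMMETRIC ACCESS INACCESSIBLE (status code type 0x3, status code 0x2), i.e. commands completed while one of the two ANA paths was inaccessible during the I/O run. A hypothetical post-processing snippet, not part of the test suite, for summarizing such a dump by counting inaccessible completions per command identifier:

# Count ANA-inaccessible completions and bucket them by cid; the try.txt
# path is the bdevperf log file cat'ed earlier in this run.
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]* cid:[0-9]*' \
	/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt |
	awk '{print $6}' | sort | uniq -c | sort -rn | head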
00:30:22.595 [2024-11-29 21:59:39.047050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.047061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182700 00:30:22.595 [2024-11-29 21:59:39.047070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:22.595 [2024-11-29 21:59:39.047081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.047091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.047102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.047111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.047122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.047131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.047143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.047152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.047163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.047172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.047735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.047748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.048169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 
21:59:39.048192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.048213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.048233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.048354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.048375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 
21:59:39.048386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 
sqhd:0035 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.596 [2024-11-29 21:59:39.048784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182700 00:30:22.596 [2024-11-29 21:59:39.048804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:22.596 [2024-11-29 21:59:39.048817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.048827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.048838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.048847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.048858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.048867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.048878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.048887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.048899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.048908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.048920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.048929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.597 [2024-11-29 21:59:39.058565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:30:22.597 [2024-11-29 21:59:39.058577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:22.597 [2024-11-29 21:59:39.058586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:9896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.058983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.058996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.059005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.059017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182700 00:30:22.597 [2024-11-29 21:59:39.059026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:22.597 [2024-11-29 21:59:39.059037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9968 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 
key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182700 00:30:22.598 
[2024-11-29 21:59:39.059524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059713] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.598 [2024-11-29 21:59:39.059786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182700 00:30:22.598 [2024-11-29 21:59:39.059795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.059990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.059999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060079] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:22.599 [2024-11-29 21:59:39.060253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182700 00:30:22.599 [2024-11-29 21:59:39.060262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:30:22.599 [2024-11-29 21:59:39.060273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x182700
00:30:22.599 [2024-11-29 21:59:39.060282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:30:22.599 [2024-11-29 21:59:39.060293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182700
00:30:22.599 [2024-11-29 21:59:39.060302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:30:22.599 [2024-11-29 21:59:39.060313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182700
00:30:22.599 [2024-11-29 21:59:39.060322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:30:22.599 [2024-11-29 21:59:39.060333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182700
00:30:22.599 [2024-11-29 21:59:39.060342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:30:22.599 [2024-11-29 21:59:39.060353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182700
00:30:22.599 [2024-11-29 21:59:39.060362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:30:22.599 [2024-11-29 21:59:39.060632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182700
00:30:22.599 [2024-11-29 21:59:39.060646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:22.599 17737.54 IOPS, 69.29 MiB/s [2024-11-29T20:59:54.847Z] 16470.57 IOPS, 64.34 MiB/s [2024-11-29T20:59:54.847Z] 15372.53 IOPS, 60.05 MiB/s [2024-11-29T20:59:54.847Z] 15307.62 IOPS, 59.80 MiB/s [2024-11-29T20:59:54.847Z] 15525.94 IOPS, 60.65 MiB/s [2024-11-29T20:59:54.847Z] 15623.17 IOPS, 61.03 MiB/s [2024-11-29T20:59:54.847Z] 15612.05 IOPS, 60.98 MiB/s [2024-11-29T20:59:54.847Z] 15602.15 IOPS, 60.95 MiB/s [2024-11-29T20:59:54.847Z] 15758.38 IOPS, 61.56 MiB/s [2024-11-29T20:59:54.847Z] 15909.95 IOPS, 62.15 MiB/s [2024-11-29T20:59:54.847Z] 16016.26 IOPS, 62.56 MiB/s [2024-11-29T20:59:54.847Z] 15985.12 IOPS, 62.44 MiB/s [2024-11-29T20:59:54.847Z] 15958.12 IOPS, 62.34 MiB/s [2024-11-29T20:59:54.847Z]
[2024-11-29 21:59:52.163379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:22.599 [2024-11-29 21:59:52.163421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:22.599 [2024-11-29 21:59:52.163993] nvme_qpair.c: 243:nvme_io_qpair_print_command:
00:30:22.599 [2024-11-29 21:59:52.164006] nvme_qpair.c: 243/474: *NOTICE*: [ ~170 further command/completion pairs elided: every remaining READ and WRITE on qid:1 — WRITEs at lba 84744-85232 (SGL DATA BLOCK OFFSET 0x0), READs at lba 84224-84704 (SGL KEYED DATA BLOCK, key:0x182700) — completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 005b through 0019, p:0 m:0 dnr:0, while the active path's ANA state was inaccessible ]
00:30:22.601 [2024-11-29T20:59:54.849Z] 16028.31 IOPS, 62.61 MiB/s
00:30:22.601 [2024-11-29T20:59:54.849Z] 16133.48 IOPS, 63.02 MiB/s
00:30:22.601 Received shutdown signal, test time was about 27.569642 seconds
00:30:22.601
00:30:22.601                                                                       Latency(us)
00:30:22.601 [2024-11-29T20:59:54.849Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:22.601 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:22.601 Verification LBA range: start 0x0 length 0x4000
00:30:22.601 Nvme0n1                     :      27.57   16180.71      63.21       0.00     0.00    7891.26     825.75 3033320.65
00:30:22.601 [2024-11-29T20:59:54.849Z] ===================================================================================================================
00:30:22.601 [2024-11-29T20:59:54.849Z] Total                       :            16180.71      63.21       0.00     0.00    7891.26     825.75 3033320.65
00:30:22.601 [2024-11-29 21:59:54.415100] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
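As a quick sanity check on a summary like the one above, the aggregate IOPS can be pulled out of the console log with a one-liner. This is a minimal sketch only: it assumes the output was saved to a file named console.log (a hypothetical name) and that the 'Total' row keeps the format shown here.

  # Print the first field after the last ':' on the 'Total' row, i.e. the IOPS column.
  awk -F: '/ Total +:/ { split($NF, f, " "); print "total IOPS: " f[1]; exit }' console.log
  # expected for the run above: total IOPS: 16180.71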
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:22.601 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:30:22.859 rmmod nvme_rdma
00:30:22.859 rmmod nvme_fabrics
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 3192031 ']'
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 3192031
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3192031 ']'
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3192031
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3192031
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:22.859 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3192031'
00:30:22.860 killing process with pid 3192031
00:30:22.860 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3192031
00:30:22.860 21:59:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3192031
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:30:23.117
00:30:23.117 real	0m37.922s
00:30:23.117 user	1m47.786s
00:30:23.117 sys	0m9.151s
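The trace above shows the shape of nvmfcleanup's module teardown: errors are tolerated under set +e and the unload is retried, because nvme-rdma can stay "in use" for a moment while queues drain. A minimal standalone sketch of that pattern follows; the sleep between attempts is an assumption of this sketch, not the repo's exact code.

  # Best-effort unload of the fabrics stack; don't abort the whole cleanup on a busy module.
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && break   # -v echoes the underlying rmmod calls, as in the log
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e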
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:23.117 ************************************
00:30:23.117 END TEST nvmf_host_multipath_status
00:30:23.117 ************************************
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:23.117 ************************************
00:30:23.117 START TEST nvmf_discovery_remove_ifc
00:30:23.117 ************************************
00:30:23.117 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:30:23.117 * Looking for test storage...
00:30:23.117 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:30:23.376 [ ~35 xtrace lines elided: common/autotest_common.sh@1680-@1681 and scripts/common.sh@333-@368 run cmp_versions on the installed lcov — 'lt 1.15 2' splits both versions on IFS=.-:, compares ver1[0]=1 against ver2[0]=2 numerically, and returns 0, so lcov predates v2 ]
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:23.377 [ repeated exports elided: common/autotest_common.sh@1694-@1695 export LCOV_OPTS and LCOV with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 ]
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
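The field-by-field walk just elided is the generic dotted-version compare used to decide whether the installed lcov predates v2. A minimal sketch of the same idea, under stated assumptions: the helper name version_lt is made up, and fields are numeric without leading zeros (bash arithmetic would read 09 as bad octal).

  # True (exit 0) iff $1 sorts strictly before $2, comparing dot-separated fields numerically.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0, so 1.15 vs 1.15.0 compare equal
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal
  }
  version_lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* coverage flags'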
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:23.377 [ nvmf/common.sh@9-@22 variable setup elided: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, nvme gen-hostnqn -> NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e, NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ]
00:30:23.377 [ scripts/common.sh sourcing and paths/export.sh@2-@6 elided: the PATH export is rebuilt four times from the same /opt/golangci, /opt/protoc and /opt/go prefixes ]
00:30:23.377 [ nvmf/common.sh@51-@55 app-arg setup elided: NVMF_APP_SHM_ID=0, NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF), have_pci_nics=0; the empty '[' '' -eq 1 ']' test at line 33 triggers the warning below ]
00:30:23.377 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:30:23.377 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:30:23.377
00:30:23.377 real	0m0.216s
00:30:23.377 user	0m0.125s
00:30:23.377 sys	0m0.107s
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:30:23.377 ************************************
00:30:23.377 END TEST nvmf_discovery_remove_ifc
00:30:23.377 ************************************
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:23.377 ************************************
00:30:23.377 START TEST nvmf_identify_kernel_target
00:30:23.377 ************************************
00:30:23.377 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:30:23.637 * Looking for test storage...
00:30:23.637 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
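The RDMA skip in discovery_remove_ifc.sh a few lines up is just a transport guard: compare the transport, print why, and exit 0 so the runner counts the test as passed rather than failed. A minimal sketch of the pattern, with TEST_TRANSPORT as an assumed variable name for what the trace shows already expanded to rdma:

  # Skip cleanly instead of failing on a transport the test can't exercise.
  if [ "$TEST_TRANSPORT" = rdma ]; then
      echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
      exit 0
  fi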
00:30:23.637 [ ~40 xtrace lines elided: the same lcov cmp_versions check as in the nvmf_discovery_remove_ifc block above ('lt 1.15 2' returns 0), followed by the same LCOV_OPTS/LCOV exports with branch and function coverage enabled ]
00:30:23.637 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:30:23.637 [ nvmf/common.sh@7-@22, scripts/common.sh sourcing, and paths/export.sh@2-@6 elided: same uname check, NVMF_* port/IP variables, NVME_HOSTNQN/NVME_HOSTID from nvme gen-hostnqn, and the fourfold PATH export as in the previous test ]
00:30:23.638 [ nvmf/common.sh@51-@55 app-arg setup elided, again tripping over the empty comparison at line 33: ]
00:30:23.638 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z rdma ']'
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:30:23.638 21:59:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:30:30.202 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:30.202 [ nvmf/common.sh@315-@322 array declarations elided: pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722, mlx ]
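gather_supported_nvmf_pci_devs, whose trace continues below, is at heart a table lookup: NICs are bucketed into e810/x722/mlx families by PCI vendor:device ID, and everything else is ignored. A minimal sketch of the same classification against live lspci -nn output — the ID-to-family mapping comes from the trace itself, while the lspci formatting and device naming are assumptions of this sketch:

  # Bucket NICs by [vendor:device] the way the nvmf tests do (0x8086 Intel, 0x15b3 Mellanox).
  lspci -nn | grep -Ei 'ethernet|network' | while read -r line; do
      case $line in
          *15b3:1015*) echo "mlx5 (this rig's 0x1015 parts): $line" ;;
          *8086:1592*|*8086:159b*) echo "e810: $line" ;;
          *8086:37d2*) echo "x722: $line" ;;
      esac
  done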
00:30:30.202 [ nvmf/common.sh@325-@342 elided: e810 collects Intel device IDs 0x1592/0x159b, x722 collects 0x37d2, and mlx collects the Mellanox IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015 and 0x1013 from pci_bus_cache ]
00:30:30.202 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}")
00:30:30.202 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ rdma == rdma ]]
00:30:30.202 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}")
00:30:30.202 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}")
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]]
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}")
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 ))
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:30:30.203 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:30:30.203 [ nvmf/common.sh@366-@376 driver checks elided: mlx5_core is neither unknown nor unbound, 0x1015 is not 0x1017/0x1019, transport is rdma ]
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15'
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:30:30.203 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:30:30.203 [ same nvmf/common.sh@366-@386 checks for the second port elided ]
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 ))
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]]
00:30:30.203 [ nvmf/common.sh@406-@425 per-device net lookup elided: /sys/bus/pci/devices/<pci>/net is resolved for each port and appended to net_devs ]
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:30:30.203 Found net devices under 0000:d9:00.0: mlx_0_0
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:30:30.203 Found net devices under 0000:d9:00.1: mlx_0_1
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ rdma == tcp ]]
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ rdma == rdma ]]
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # rdma_device_init
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@525 -- # load_ib_rdma_modules
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:30:30.203 22:00:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@526 -- # allocate_nic_ips
00:30:30.203 [ nvmf/common.sh@76-@109 interface walk elided: rxe_cfg_small.sh lists the RDMA-capable interfaces and the loop matches mlx_0_0 and mlx_0_1, 'continue 2' advancing to the next net device on each match ]
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:30:30.203 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:30:30.203 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:30:30.203     link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:30:30.203     altname enp217s0f0np0
00:30:30.204     altname ens818f0np0
00:30:30.204     inet 192.168.100.8/24 scope global mlx_0_0
00:30:30.204        valid_lft forever preferred_lft forever
00:30:30.204 [ same nvmf/common.sh@116-@117 extraction for mlx_0_1 elided ]
00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:30:30.204 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:30:30.204     link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:30:30.204     altname enp217s0f1np1
00:30:30.204     altname ens818f1np1
00:30:30.204     inet 192.168.100.9/24 scope global mlx_0_1
00:30:30.204        valid_lft forever preferred_lft forever
00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0
00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 --
# '[' '' == iso ']' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:30.204 22:00:02 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:30:30.204 192.168.100.9' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:30:30.204 192.168.100.9' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # head -n 1 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:30:30.204 192.168.100.9' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # tail -n +2 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # head -n 1 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:30.204 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:30.205 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:30.205 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:30:30.205 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:30:30.205 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:30:30.205 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:30.205 22:00:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:30:33.488 Waiting for block devices as requested 00:30:33.488 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:33.488 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:33.488 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:33.488 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:33.488 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:33.488 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:33.488 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:33.488 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:33.746 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:33.746 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:33.746 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:34.004 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:34.004 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:34.004 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:34.262 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:34.262 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:34.262 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:34.521 22:00:06 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:34.521 No valid GPT data, bailing 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 192.168.100.8 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo rdma 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:34.521 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:30:34.779 00:30:34.779 Discovery Log Number of Records 2, Generation counter 2 00:30:34.780 =====Discovery Log Entry 0====== 00:30:34.780 trtype: rdma 00:30:34.780 adrfam: ipv4 00:30:34.780 subtype: current discovery subsystem 00:30:34.780 treq: not specified, sq flow control disable supported 00:30:34.780 portid: 1 00:30:34.780 trsvcid: 4420 00:30:34.780 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:30:34.780 traddr: 192.168.100.8 00:30:34.780 eflags: none 00:30:34.780 rdma_prtype: not specified 00:30:34.780 rdma_qptype: connected 00:30:34.780 rdma_cms: rdma-cm 00:30:34.780 rdma_pkey: 0x0000 00:30:34.780 =====Discovery Log Entry 1====== 00:30:34.780 trtype: rdma 00:30:34.780 adrfam: ipv4 00:30:34.780 subtype: nvme subsystem 00:30:34.780 treq: not specified, sq flow control disable supported 00:30:34.780 portid: 1 00:30:34.780 trsvcid: 4420 00:30:34.780 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:34.780 traddr: 192.168.100.8 00:30:34.780 eflags: none 00:30:34.780 rdma_prtype: not specified 00:30:34.780 rdma_qptype: connected 00:30:34.780 rdma_cms: rdma-cm 00:30:34.780 rdma_pkey: 0x0000 00:30:34.780 22:00:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:30:34.780 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:35.039 ===================================================== 00:30:35.039 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:35.040 ===================================================== 00:30:35.040 Controller Capabilities/Features 00:30:35.040 ================================ 00:30:35.040 Vendor ID: 0000 00:30:35.040 Subsystem Vendor ID: 0000 00:30:35.040 Serial Number: d68b9a34cb2bb58fd962 00:30:35.040 Model Number: Linux 00:30:35.040 Firmware Version: 6.8.9-20 00:30:35.040 Recommended Arb Burst: 0 00:30:35.040 IEEE OUI Identifier: 00 00 00 00:30:35.040 Multi-path I/O 00:30:35.040 May have multiple subsystem ports: No 00:30:35.040 May have multiple controllers: No 00:30:35.040 Associated with SR-IOV VF: No 00:30:35.040 Max Data Transfer Size: Unlimited 00:30:35.040 Max Number of Namespaces: 0 00:30:35.040 Max Number of I/O Queues: 1024 00:30:35.040 NVMe Specification Version (VS): 1.3 00:30:35.040 NVMe Specification Version (Identify): 1.3 00:30:35.040 Maximum Queue Entries: 128 00:30:35.040 Contiguous Queues Required: No 00:30:35.040 Arbitration Mechanisms Supported 00:30:35.040 Weighted Round Robin: Not Supported 00:30:35.040 Vendor Specific: Not Supported 00:30:35.040 Reset Timeout: 7500 ms 00:30:35.040 Doorbell Stride: 4 bytes 00:30:35.040 NVM Subsystem Reset: Not Supported 00:30:35.040 Command Sets Supported 00:30:35.040 NVM Command Set: Supported 00:30:35.040 Boot Partition: Not Supported 00:30:35.040 Memory Page Size Minimum: 4096 bytes 00:30:35.040 Memory Page Size Maximum: 4096 bytes 00:30:35.040 Persistent Memory Region: Not Supported 00:30:35.040 Optional Asynchronous Events Supported 00:30:35.040 Namespace Attribute Notices: Not Supported 00:30:35.040 Firmware Activation Notices: Not Supported 00:30:35.040 ANA Change Notices: Not Supported 00:30:35.040 PLE Aggregate Log Change Notices: Not Supported 00:30:35.040 LBA Status Info Alert Notices: Not Supported 00:30:35.040 EGE Aggregate Log Change Notices: Not Supported 00:30:35.040 Normal NVM Subsystem Shutdown event: Not Supported 00:30:35.040 Zone Descriptor Change Notices: Not Supported 00:30:35.040 Discovery Log Change Notices: Supported 00:30:35.040 Controller Attributes 00:30:35.040 128-bit Host Identifier: Not Supported 00:30:35.040 Non-Operational Permissive Mode: Not Supported 00:30:35.040 NVM Sets: Not Supported 00:30:35.040 Read Recovery Levels: Not Supported 00:30:35.040 Endurance Groups: Not Supported 00:30:35.040 Predictable Latency Mode: Not 
Supported 00:30:35.040 Traffic Based Keep ALive: Not Supported 00:30:35.040 Namespace Granularity: Not Supported 00:30:35.040 SQ Associations: Not Supported 00:30:35.040 UUID List: Not Supported 00:30:35.040 Multi-Domain Subsystem: Not Supported 00:30:35.040 Fixed Capacity Management: Not Supported 00:30:35.040 Variable Capacity Management: Not Supported 00:30:35.040 Delete Endurance Group: Not Supported 00:30:35.040 Delete NVM Set: Not Supported 00:30:35.040 Extended LBA Formats Supported: Not Supported 00:30:35.040 Flexible Data Placement Supported: Not Supported 00:30:35.040 00:30:35.040 Controller Memory Buffer Support 00:30:35.040 ================================ 00:30:35.040 Supported: No 00:30:35.040 00:30:35.040 Persistent Memory Region Support 00:30:35.040 ================================ 00:30:35.040 Supported: No 00:30:35.040 00:30:35.040 Admin Command Set Attributes 00:30:35.040 ============================ 00:30:35.040 Security Send/Receive: Not Supported 00:30:35.040 Format NVM: Not Supported 00:30:35.040 Firmware Activate/Download: Not Supported 00:30:35.040 Namespace Management: Not Supported 00:30:35.040 Device Self-Test: Not Supported 00:30:35.040 Directives: Not Supported 00:30:35.040 NVMe-MI: Not Supported 00:30:35.040 Virtualization Management: Not Supported 00:30:35.040 Doorbell Buffer Config: Not Supported 00:30:35.040 Get LBA Status Capability: Not Supported 00:30:35.040 Command & Feature Lockdown Capability: Not Supported 00:30:35.040 Abort Command Limit: 1 00:30:35.040 Async Event Request Limit: 1 00:30:35.040 Number of Firmware Slots: N/A 00:30:35.040 Firmware Slot 1 Read-Only: N/A 00:30:35.040 Firmware Activation Without Reset: N/A 00:30:35.040 Multiple Update Detection Support: N/A 00:30:35.040 Firmware Update Granularity: No Information Provided 00:30:35.040 Per-Namespace SMART Log: No 00:30:35.040 Asymmetric Namespace Access Log Page: Not Supported 00:30:35.040 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:35.040 Command Effects Log Page: Not Supported 00:30:35.040 Get Log Page Extended Data: Supported 00:30:35.040 Telemetry Log Pages: Not Supported 00:30:35.040 Persistent Event Log Pages: Not Supported 00:30:35.040 Supported Log Pages Log Page: May Support 00:30:35.040 Commands Supported & Effects Log Page: Not Supported 00:30:35.040 Feature Identifiers & Effects Log Page:May Support 00:30:35.040 NVMe-MI Commands & Effects Log Page: May Support 00:30:35.040 Data Area 4 for Telemetry Log: Not Supported 00:30:35.040 Error Log Page Entries Supported: 1 00:30:35.040 Keep Alive: Not Supported 00:30:35.040 00:30:35.040 NVM Command Set Attributes 00:30:35.040 ========================== 00:30:35.040 Submission Queue Entry Size 00:30:35.040 Max: 1 00:30:35.040 Min: 1 00:30:35.040 Completion Queue Entry Size 00:30:35.040 Max: 1 00:30:35.040 Min: 1 00:30:35.040 Number of Namespaces: 0 00:30:35.040 Compare Command: Not Supported 00:30:35.040 Write Uncorrectable Command: Not Supported 00:30:35.040 Dataset Management Command: Not Supported 00:30:35.040 Write Zeroes Command: Not Supported 00:30:35.040 Set Features Save Field: Not Supported 00:30:35.040 Reservations: Not Supported 00:30:35.040 Timestamp: Not Supported 00:30:35.040 Copy: Not Supported 00:30:35.040 Volatile Write Cache: Not Present 00:30:35.040 Atomic Write Unit (Normal): 1 00:30:35.040 Atomic Write Unit (PFail): 1 00:30:35.040 Atomic Compare & Write Unit: 1 00:30:35.040 Fused Compare & Write: Not Supported 00:30:35.040 Scatter-Gather List 00:30:35.040 SGL Command Set: Supported 00:30:35.040 SGL 
Keyed: Supported 00:30:35.040 SGL Bit Bucket Descriptor: Not Supported 00:30:35.040 SGL Metadata Pointer: Not Supported 00:30:35.041 Oversized SGL: Not Supported 00:30:35.041 SGL Metadata Address: Not Supported 00:30:35.041 SGL Offset: Supported 00:30:35.041 Transport SGL Data Block: Not Supported 00:30:35.041 Replay Protected Memory Block: Not Supported 00:30:35.041 00:30:35.041 Firmware Slot Information 00:30:35.041 ========================= 00:30:35.041 Active slot: 0 00:30:35.041 00:30:35.041 00:30:35.041 Error Log 00:30:35.041 ========= 00:30:35.041 00:30:35.041 Active Namespaces 00:30:35.041 ================= 00:30:35.041 Discovery Log Page 00:30:35.041 ================== 00:30:35.041 Generation Counter: 2 00:30:35.041 Number of Records: 2 00:30:35.041 Record Format: 0 00:30:35.041 00:30:35.041 Discovery Log Entry 0 00:30:35.041 ---------------------- 00:30:35.041 Transport Type: 1 (RDMA) 00:30:35.041 Address Family: 1 (IPv4) 00:30:35.041 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:35.041 Entry Flags: 00:30:35.041 Duplicate Returned Information: 0 00:30:35.041 Explicit Persistent Connection Support for Discovery: 0 00:30:35.041 Transport Requirements: 00:30:35.041 Secure Channel: Not Specified 00:30:35.041 Port ID: 1 (0x0001) 00:30:35.041 Controller ID: 65535 (0xffff) 00:30:35.041 Admin Max SQ Size: 32 00:30:35.041 Transport Service Identifier: 4420 00:30:35.041 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:35.041 Transport Address: 192.168.100.8 00:30:35.041 Transport Specific Address Subtype - RDMA 00:30:35.041 RDMA QP Service Type: 1 (Reliable Connected) 00:30:35.041 RDMA Provider Type: 1 (No provider specified) 00:30:35.041 RDMA CM Service: 1 (RDMA_CM) 00:30:35.041 Discovery Log Entry 1 00:30:35.041 ---------------------- 00:30:35.041 Transport Type: 1 (RDMA) 00:30:35.041 Address Family: 1 (IPv4) 00:30:35.041 Subsystem Type: 2 (NVM Subsystem) 00:30:35.041 Entry Flags: 00:30:35.041 Duplicate Returned Information: 0 00:30:35.041 Explicit Persistent Connection Support for Discovery: 0 00:30:35.041 Transport Requirements: 00:30:35.041 Secure Channel: Not Specified 00:30:35.041 Port ID: 1 (0x0001) 00:30:35.041 Controller ID: 65535 (0xffff) 00:30:35.041 Admin Max SQ Size: 32 00:30:35.041 Transport Service Identifier: 4420 00:30:35.041 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:35.041 Transport Address: 192.168.100.8 00:30:35.041 Transport Specific Address Subtype - RDMA 00:30:35.041 RDMA QP Service Type: 1 (Reliable Connected) 00:30:35.041 RDMA Provider Type: 1 (No provider specified) 00:30:35.041 RDMA CM Service: 1 (RDMA_CM) 00:30:35.041 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.041 get_feature(0x01) failed 00:30:35.041 get_feature(0x02) failed 00:30:35.041 get_feature(0x04) failed 00:30:35.041 ===================================================== 00:30:35.041 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.041 ===================================================== 00:30:35.041 Controller Capabilities/Features 00:30:35.041 ================================ 00:30:35.041 Vendor ID: 0000 00:30:35.041 Subsystem Vendor ID: 0000 00:30:35.041 Serial Number: 1c35da1bb02be6259f4c 00:30:35.041 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:35.041 Firmware 
Version: 6.8.9-20 00:30:35.041 Recommended Arb Burst: 6 00:30:35.041 IEEE OUI Identifier: 00 00 00 00:30:35.041 Multi-path I/O 00:30:35.041 May have multiple subsystem ports: Yes 00:30:35.041 May have multiple controllers: Yes 00:30:35.041 Associated with SR-IOV VF: No 00:30:35.041 Max Data Transfer Size: 1048576 00:30:35.041 Max Number of Namespaces: 1024 00:30:35.041 Max Number of I/O Queues: 128 00:30:35.041 NVMe Specification Version (VS): 1.3 00:30:35.041 NVMe Specification Version (Identify): 1.3 00:30:35.041 Maximum Queue Entries: 128 00:30:35.041 Contiguous Queues Required: No 00:30:35.041 Arbitration Mechanisms Supported 00:30:35.041 Weighted Round Robin: Not Supported 00:30:35.041 Vendor Specific: Not Supported 00:30:35.041 Reset Timeout: 7500 ms 00:30:35.041 Doorbell Stride: 4 bytes 00:30:35.041 NVM Subsystem Reset: Not Supported 00:30:35.041 Command Sets Supported 00:30:35.041 NVM Command Set: Supported 00:30:35.041 Boot Partition: Not Supported 00:30:35.041 Memory Page Size Minimum: 4096 bytes 00:30:35.041 Memory Page Size Maximum: 4096 bytes 00:30:35.041 Persistent Memory Region: Not Supported 00:30:35.041 Optional Asynchronous Events Supported 00:30:35.041 Namespace Attribute Notices: Supported 00:30:35.041 Firmware Activation Notices: Not Supported 00:30:35.041 ANA Change Notices: Supported 00:30:35.041 PLE Aggregate Log Change Notices: Not Supported 00:30:35.041 LBA Status Info Alert Notices: Not Supported 00:30:35.041 EGE Aggregate Log Change Notices: Not Supported 00:30:35.041 Normal NVM Subsystem Shutdown event: Not Supported 00:30:35.041 Zone Descriptor Change Notices: Not Supported 00:30:35.041 Discovery Log Change Notices: Not Supported 00:30:35.041 Controller Attributes 00:30:35.041 128-bit Host Identifier: Supported 00:30:35.041 Non-Operational Permissive Mode: Not Supported 00:30:35.041 NVM Sets: Not Supported 00:30:35.041 Read Recovery Levels: Not Supported 00:30:35.041 Endurance Groups: Not Supported 00:30:35.041 Predictable Latency Mode: Not Supported 00:30:35.041 Traffic Based Keep ALive: Supported 00:30:35.041 Namespace Granularity: Not Supported 00:30:35.041 SQ Associations: Not Supported 00:30:35.041 UUID List: Not Supported 00:30:35.041 Multi-Domain Subsystem: Not Supported 00:30:35.041 Fixed Capacity Management: Not Supported 00:30:35.041 Variable Capacity Management: Not Supported 00:30:35.041 Delete Endurance Group: Not Supported 00:30:35.041 Delete NVM Set: Not Supported 00:30:35.041 Extended LBA Formats Supported: Not Supported 00:30:35.041 Flexible Data Placement Supported: Not Supported 00:30:35.041 00:30:35.041 Controller Memory Buffer Support 00:30:35.041 ================================ 00:30:35.041 Supported: No 00:30:35.041 00:30:35.041 Persistent Memory Region Support 00:30:35.041 ================================ 00:30:35.041 Supported: No 00:30:35.041 00:30:35.041 Admin Command Set Attributes 00:30:35.041 ============================ 00:30:35.041 Security Send/Receive: Not Supported 00:30:35.041 Format NVM: Not Supported 00:30:35.041 Firmware Activate/Download: Not Supported 00:30:35.041 Namespace Management: Not Supported 00:30:35.041 Device Self-Test: Not Supported 00:30:35.041 Directives: Not Supported 00:30:35.041 NVMe-MI: Not Supported 00:30:35.041 Virtualization Management: Not Supported 00:30:35.042 Doorbell Buffer Config: Not Supported 00:30:35.042 Get LBA Status Capability: Not Supported 00:30:35.042 Command & Feature Lockdown Capability: Not Supported 00:30:35.042 Abort Command Limit: 4 00:30:35.042 Async Event Request Limit: 4 
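The get_feature(0x01)/(0x02)/(0x04) failures logged just before this identify output are expected: the Linux kernel nvmet target implements only the mandatory admin features, so spdk_nvme_identify's optional queries (0x01 Arbitration, 0x02 Power Management, 0x04 Temperature Threshold) are rejected and the tool simply notes the failure and continues. A minimal sketch of reproducing one such query by hand with nvme-cli, assuming the target from this run is still exported and that it enumerates as /dev/nvme1 (both assumptions, not taken from the log):

    # Connect to the kernel target over RDMA (device name below is an assumption).
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    # Query Feature 0x01 (Arbitration); nvmet is expected to reject this,
    # mirroring the get_feature(0x01) failure in the log above.
    nvme get-feature /dev/nvme1 -f 0x01
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn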
00:30:35.042 Number of Firmware Slots: N/A
00:30:35.042 Firmware Slot 1 Read-Only: N/A
00:30:35.042 Firmware Activation Without Reset: N/A
00:30:35.042 Multiple Update Detection Support: N/A
00:30:35.042 Firmware Update Granularity: No Information Provided
00:30:35.042 Per-Namespace SMART Log: Yes
00:30:35.042 Asymmetric Namespace Access Log Page: Supported
00:30:35.042 ANA Transition Time : 10 sec
00:30:35.042
00:30:35.042 Asymmetric Namespace Access Capabilities
00:30:35.042 ANA Optimized State : Supported
00:30:35.042 ANA Non-Optimized State : Supported
00:30:35.042 ANA Inaccessible State : Supported
00:30:35.042 ANA Persistent Loss State : Supported
00:30:35.042 ANA Change State : Supported
00:30:35.042 ANAGRPID is not changed : No
00:30:35.042 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:30:35.042
00:30:35.042 ANA Group Identifier Maximum : 128
00:30:35.042 Number of ANA Group Identifiers : 128
00:30:35.042 Max Number of Allowed Namespaces : 1024
00:30:35.042 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:30:35.042 Command Effects Log Page: Supported
00:30:35.042 Get Log Page Extended Data: Supported
00:30:35.042 Telemetry Log Pages: Not Supported
00:30:35.042 Persistent Event Log Pages: Not Supported
00:30:35.042 Supported Log Pages Log Page: May Support
00:30:35.042 Commands Supported & Effects Log Page: Not Supported
00:30:35.042 Feature Identifiers & Effects Log Page:May Support
00:30:35.042 NVMe-MI Commands & Effects Log Page: May Support
00:30:35.042 Data Area 4 for Telemetry Log: Not Supported
00:30:35.042 Error Log Page Entries Supported: 128
00:30:35.042 Keep Alive: Supported
00:30:35.042 Keep Alive Granularity: 1000 ms
00:30:35.042
00:30:35.042 NVM Command Set Attributes
00:30:35.042 ==========================
00:30:35.042 Submission Queue Entry Size
00:30:35.042 Max: 64
00:30:35.042 Min: 64
00:30:35.042 Completion Queue Entry Size
00:30:35.042 Max: 16
00:30:35.042 Min: 16
00:30:35.042 Number of Namespaces: 1024
00:30:35.042 Compare Command: Not Supported
00:30:35.042 Write Uncorrectable Command: Not Supported
00:30:35.042 Dataset Management Command: Supported
00:30:35.042 Write Zeroes Command: Supported
00:30:35.042 Set Features Save Field: Not Supported
00:30:35.042 Reservations: Not Supported
00:30:35.042 Timestamp: Not Supported
00:30:35.042 Copy: Not Supported
00:30:35.042 Volatile Write Cache: Present
00:30:35.042 Atomic Write Unit (Normal): 1
00:30:35.042 Atomic Write Unit (PFail): 1
00:30:35.042 Atomic Compare & Write Unit: 1
00:30:35.042 Fused Compare & Write: Not Supported
00:30:35.042 Scatter-Gather List
00:30:35.042 SGL Command Set: Supported
00:30:35.042 SGL Keyed: Supported
00:30:35.042 SGL Bit Bucket Descriptor: Not Supported
00:30:35.042 SGL Metadata Pointer: Not Supported
00:30:35.042 Oversized SGL: Not Supported
00:30:35.042 SGL Metadata Address: Not Supported
00:30:35.042 SGL Offset: Supported
00:30:35.042 Transport SGL Data Block: Not Supported
00:30:35.042 Replay Protected Memory Block: Not Supported
00:30:35.042
00:30:35.042 Firmware Slot Information
00:30:35.042 =========================
00:30:35.042 Active slot: 0
00:30:35.042
00:30:35.042 Asymmetric Namespace Access
00:30:35.042 ===========================
00:30:35.042 Change Count : 0
00:30:35.042 Number of ANA Group Descriptors : 1
00:30:35.042 ANA Group Descriptor : 0
00:30:35.042 ANA Group ID : 1
00:30:35.042 Number of NSID Values : 1
00:30:35.042 Change Count : 0
00:30:35.042 ANA State : 1
00:30:35.042 Namespace Identifier : 1
00:30:35.042
00:30:35.042 Commands Supported and Effects
00:30:35.042 ==============================
00:30:35.042 Admin Commands
00:30:35.042 --------------
00:30:35.042 Get Log Page (02h): Supported
00:30:35.042 Identify (06h): Supported
00:30:35.042 Abort (08h): Supported
00:30:35.042 Set Features (09h): Supported
00:30:35.042 Get Features (0Ah): Supported
00:30:35.042 Asynchronous Event Request (0Ch): Supported
00:30:35.042 Keep Alive (18h): Supported
00:30:35.042 I/O Commands
00:30:35.042 ------------
00:30:35.042 Flush (00h): Supported
00:30:35.042 Write (01h): Supported LBA-Change
00:30:35.042 Read (02h): Supported
00:30:35.042 Write Zeroes (08h): Supported LBA-Change
00:30:35.042 Dataset Management (09h): Supported
00:30:35.042
00:30:35.042 Error Log
00:30:35.042 =========
00:30:35.042 Entry: 0
00:30:35.042 Error Count: 0x3
00:30:35.042 Submission Queue Id: 0x0
00:30:35.042 Command Id: 0x5
00:30:35.042 Phase Bit: 0
00:30:35.042 Status Code: 0x2
00:30:35.042 Status Code Type: 0x0
00:30:35.042 Do Not Retry: 1
00:30:35.042 Error Location: 0x28
00:30:35.042 LBA: 0x0
00:30:35.042 Namespace: 0x0
00:30:35.042 Vendor Log Page: 0x0
00:30:35.042 -----------
00:30:35.042 Entry: 1
00:30:35.042 Error Count: 0x2
00:30:35.042 Submission Queue Id: 0x0
00:30:35.042 Command Id: 0x5
00:30:35.042 Phase Bit: 0
00:30:35.042 Status Code: 0x2
00:30:35.042 Status Code Type: 0x0
00:30:35.042 Do Not Retry: 1
00:30:35.042 Error Location: 0x28
00:30:35.042 LBA: 0x0
00:30:35.042 Namespace: 0x0
00:30:35.042 Vendor Log Page: 0x0
00:30:35.042 -----------
00:30:35.042 Entry: 2
00:30:35.042 Error Count: 0x1
00:30:35.042 Submission Queue Id: 0x0
00:30:35.042 Command Id: 0x0
00:30:35.042 Phase Bit: 0
00:30:35.042 Status Code: 0x2
00:30:35.042 Status Code Type: 0x0
00:30:35.042 Do Not Retry: 1
00:30:35.042 Error Location: 0x28
00:30:35.042 LBA: 0x0
00:30:35.042 Namespace: 0x0
00:30:35.042 Vendor Log Page: 0x0
00:30:35.042
00:30:35.042 Number of Queues
00:30:35.042 ================
00:30:35.042 Number of I/O Submission Queues: 128
00:30:35.042 Number of I/O Completion Queues: 128
00:30:35.042
00:30:35.042 ZNS Specific Controller Data
00:30:35.042 ============================
00:30:35.042 Zone Append Size Limit: 0
00:30:35.042
00:30:35.042
00:30:35.042 Active Namespaces
00:30:35.042 =================
00:30:35.042 get_feature(0x05) failed
00:30:35.042 Namespace ID:1
00:30:35.042 Command Set Identifier: NVM (00h)
00:30:35.042 Deallocate: Supported
00:30:35.042 Deallocated/Unwritten Error: Not Supported
00:30:35.042 Deallocated Read Value: Unknown
00:30:35.043 Deallocate in Write Zeroes: Not Supported
00:30:35.043 Deallocated Guard Field: 0xFFFF
00:30:35.043 Flush: Supported
00:30:35.043 Reservation: Not Supported
00:30:35.043 Namespace Sharing Capabilities: Multiple Controllers
00:30:35.043 Size (in LBAs): 3907029168 (1863GiB)
00:30:35.043 Capacity (in LBAs): 3907029168 (1863GiB)
00:30:35.043 Utilization (in LBAs): 3907029168 (1863GiB)
00:30:35.043 UUID: 96ebd35a-11de-4f78-a515-ba6ccf60f3a9
00:30:35.043 Thin Provisioning: Not Supported
00:30:35.043 Per-NS Atomic Units: Yes
00:30:35.043 Atomic Boundary Size (Normal): 0
00:30:35.043 Atomic Boundary Size (PFail): 0
00:30:35.043 Atomic Boundary Offset: 0
00:30:35.043 NGUID/EUI64 Never Reused: No
00:30:35.043 ANA group ID: 1
00:30:35.043 Namespace Write Protected: No
00:30:35.043 Number of LBA Formats: 1
00:30:35.043 Current LBA Format: LBA Format #00
00:30:35.043 LBA Format #00: Data Size: 512 Metadata Size: 0
00:30:35.043
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:30:35.043 rmmod nvme_rdma
00:30:35.043 rmmod nvme_fabrics
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']'
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]]
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:30:35.043 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*)
00:30:35.302 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_rdma nvmet
00:30:35.302 22:00:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:30:38.589 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
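The clean_kernel_target teardown traced above unwinds the configfs tree in strict reverse order of its creation; configfs refuses to rmdir a directory that is still referenced, so the port-to-subsystem symlink has to go before the port, and the namespace before the subsystem. A minimal standalone sketch of the same teardown, using the paths from this run (the enable attribute is an assumption for the destination of the bare 'echo 0' step in the trace):

    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"  # disable the namespace first
    rm -f "$nvmet/ports/1/subsystems/$nqn"                 # drop the port's reference to the subsystem
    rmdir "$nvmet/subsystems/$nqn/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$nqn"
    modprobe -r nvmet_rdma nvmet                           # finally unload the target modules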
00:30:38.589 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:30:38.589 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:30:40.493 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:30:40.493
00:30:40.493 real 0m17.168s
00:30:40.493 user 0m4.405s
00:30:40.493 sys 0m9.881s
00:30:40.493 22:00:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:40.493 22:00:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:30:40.493 ************************************
00:30:40.493 END TEST nvmf_identify_kernel_target
00:30:40.493 ************************************
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.751 ************************************
00:30:40.751 START TEST nvmf_auth_host
00:30:40.751 ************************************
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:30:40.751 * Looking for test storage...
00:30:40.751 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
00:30:40.751 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
00:30:41.010 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
00:30:41.010 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:41.010 22:00:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
00:30:41.010 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
00:30:41.010 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:41.010 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:41.010 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
00:30:41.010 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:41.010 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:30:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:41.010 --rc genhtml_branch_coverage=1
00:30:41.010 --rc genhtml_function_coverage=1
00:30:41.010 --rc genhtml_legend=1
00:30:41.010 --rc geninfo_all_blocks=1
00:30:41.010 --rc geninfo_unexecuted_blocks=1
00:30:41.010
00:30:41.010 '
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:30:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:41.011 --rc genhtml_branch_coverage=1
00:30:41.011 --rc genhtml_function_coverage=1
00:30:41.011 --rc genhtml_legend=1
00:30:41.011 --rc geninfo_all_blocks=1
00:30:41.011 --rc geninfo_unexecuted_blocks=1
00:30:41.011
00:30:41.011 '
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:30:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:41.011 --rc genhtml_branch_coverage=1
00:30:41.011 --rc genhtml_function_coverage=1
00:30:41.011 --rc genhtml_legend=1
00:30:41.011 --rc geninfo_all_blocks=1
00:30:41.011 --rc geninfo_unexecuted_blocks=1
00:30:41.011
00:30:41.011 '
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:30:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:41.011 --rc genhtml_branch_coverage=1
00:30:41.011 --rc genhtml_function_coverage=1
00:30:41.011 --rc genhtml_legend=1
00:30:41.011 --rc geninfo_all_blocks=1
00:30:41.011 --rc geninfo_unexecuted_blocks=1
00:30:41.011
00:30:41.011 '
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:41.011 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z rdma ']'
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable
00:30:41.011 22:00:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=()
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=()
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=()
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=()
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=()
00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local
-ga mlx 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:47.577 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:47.577 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:30:47.577 22:00:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:30:47.577 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:47.578 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:47.578 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # rdma_device_init 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@67 -- # modprobe ib_core 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # allocate_nic_ips 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:47.578 22:00:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:47.578 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:47.578 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:47.578 altname enp217s0f0np0 00:30:47.578 altname ens818f0np0 00:30:47.578 inet 192.168.100.8/24 scope global mlx_0_0 00:30:47.578 valid_lft forever preferred_lft forever 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:47.578 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:47.578 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:47.578 altname enp217s0f1np1 00:30:47.578 altname ens818f1np1 00:30:47.578 inet 192.168.100.9/24 scope global mlx_0_1 00:30:47.578 valid_lft forever preferred_lft forever 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:47.578 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:30:47.579 22:00:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:30:47.579 192.168.100.9' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:30:47.579 192.168.100.9' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # head -n 1 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:30:47.579 192.168.100.9' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # tail -n +2 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # head -n 1 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:30:47.579 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:30:47.838 22:00:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=3207730 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 3207730 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3207730 ']' 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:47.838 22:00:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=8cd09688bd6f985e12c20d4c9fbf4e96 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.kHp 
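The gen_dhchap_key sequence around here reads len/2 random bytes with xxd, picks a temp file, and (in the format_dhchap_key step traced just below) hands the hex string to an inline python formatter. A minimal standalone sketch of that formatting, assuming — as the DHHC-1:00:...: values printed later in this log suggest — that the secret is the base64 of the ASCII key characters with a CRC32 (taken here as little-endian) appended, and that the two-digit field is the digest id; gen_key_sketch is a hypothetical helper, not part of nvmf/common.sh:

# Hedged sketch of the key-file generation traced above (names are illustrative).
gen_key_sketch() {
    local digest_id=$1 len=$2 hex_key file
    hex_key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of key material
    file=$(mktemp -t spdk.key-demo.XXX)
    python3 - "$hex_key" "$digest_id" <<'PY' > "$file"
import base64, sys, zlib
key = sys.argv[1].encode()                     # the ASCII hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed little-endian CRC32 suffix
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"                          # keep the secret file private
    echo "$file"
}
gen_key_sketch 0 32    # digest id 0 (null) and a 32-char key, matching the trace above

Decoding the keys echoed later in this section (for example the DHHC-1:00:ZjEyNWFj...: value) does yield the ASCII hex key followed by four extra bytes, which is what motivates the key+CRC32 assumption in the sketch.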
00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 8cd09688bd6f985e12c20d4c9fbf4e96 0 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 8cd09688bd6f985e12c20d4c9fbf4e96 0 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=8cd09688bd6f985e12c20d4c9fbf4e96 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.kHp 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.kHp 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kHp 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=213389812f59e7df894b83b7864152ddfb3db7653706ffe24c0a043d355ac280 00:30:48.097 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.i8o 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 213389812f59e7df894b83b7864152ddfb3db7653706ffe24c0a043d355ac280 3 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 213389812f59e7df894b83b7864152ddfb3db7653706ffe24c0a043d355ac280 3 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=213389812f59e7df894b83b7864152ddfb3db7653706ffe24c0a043d355ac280 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.i8o 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.i8o 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.i8o 00:30:48.098 22:00:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=f125ac10878d98376af91780a7b4a34880e08e08b23058d3 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.FsG 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key f125ac10878d98376af91780a7b4a34880e08e08b23058d3 0 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 f125ac10878d98376af91780a7b4a34880e08e08b23058d3 0 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=f125ac10878d98376af91780a7b4a34880e08e08b23058d3 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:30:48.098 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.FsG 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.FsG 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FsG 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ba945755f303aedf21f899c4976c48f370d0f501d261aec0 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.nvz 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 
ba945755f303aedf21f899c4976c48f370d0f501d261aec0 2 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ba945755f303aedf21f899c4976c48f370d0f501d261aec0 2 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ba945755f303aedf21f899c4976c48f370d0f501d261aec0 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.nvz 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.nvz 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.nvz 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=41c3d094ae060b7523a21ad441805000 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.wJw 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 41c3d094ae060b7523a21ad441805000 1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 41c3d094ae060b7523a21ad441805000 1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=41c3d094ae060b7523a21ad441805000 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.wJw 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.wJw 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.wJw 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 
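For orientation between the key generations: the digests map that each gen_dhchap_key call declares (traced again just below) ties the hash name to the id embedded in the DHHC-1 prefix, and each ckeys[i] is the optional controller-side secret later paired with keys[i] for bidirectional authentication. A one-line illustration of the mapping — an illustrative snippet, not part of the test scripts:

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
printf 'sha384 keys carry prefix DHHC-1:%02d:\n' "${digests[sha384]}"   # -> DHHC-1:02: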
00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ddcada3ce84cde4d754cc73c40e73cf3 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.4kR 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ddcada3ce84cde4d754cc73c40e73cf3 1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ddcada3ce84cde4d754cc73c40e73cf3 1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ddcada3ce84cde4d754cc73c40e73cf3 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.4kR 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.4kR 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4kR 00:30:48.357 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=6096b553ac1b6e4b99fe7d9ae9c734af612941df127db090 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.KBq 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 6096b553ac1b6e4b99fe7d9ae9c734af612941df127db090 2 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 6096b553ac1b6e4b99fe7d9ae9c734af612941df127db090 2 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # local prefix key digest 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=6096b553ac1b6e4b99fe7d9ae9c734af612941df127db090 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:30:48.358 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.KBq 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.KBq 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.KBq 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=d8170152f00115904674f1f31d87c19e 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.ZAG 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key d8170152f00115904674f1f31d87c19e 0 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 d8170152f00115904674f1f31d87c19e 0 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=d8170152f00115904674f1f31d87c19e 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.ZAG 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.ZAG 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ZAG 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:30:48.648 22:00:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b26f35ea50c6fe18ae60595fb0ce629bb7af57b89b809cc4e5907c674a178ad3 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.ZDw 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b26f35ea50c6fe18ae60595fb0ce629bb7af57b89b809cc4e5907c674a178ad3 3 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b26f35ea50c6fe18ae60595fb0ce629bb7af57b89b809cc4e5907c674a178ad3 3 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b26f35ea50c6fe18ae60595fb0ce629bb7af57b89b809cc4e5907c674a178ad3 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.ZDw 00:30:48.648 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.ZDw 00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ZDw 00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3207730 00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3207730 ']' 00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
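Once nvmf_tgt is up and listening on the RPC socket, the loop that follows registers each generated key file with the target's keyring (rpc_cmd is the harness wrapper around scripts/rpc.py). Outside the test harness the same calls can be issued directly; a hedged sketch using the file names from this run — paths and socket defaults may differ on other setups:

# Register a host key and its paired controller key with the SPDK keyring.
./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.kHp
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i8o
./scripts/rpc.py keyring_get_keys     # list what the target now holds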
00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:48.649 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kHp 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.i8o ]] 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i8o 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FsG 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.nvz ]] 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nvz 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.wJw 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4kR ]] 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4kR 00:30:48.929 22:00:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.929 22:00:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.KBq 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ZAG ]] 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ZAG 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.929 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZDw 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:30:48.930 22:00:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:48.930 22:00:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:30:52.215 Waiting for block devices as requested 00:30:52.215 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:52.215 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:52.215 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:52.215 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:52.473 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:52.473 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:52.473 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:52.473 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:52.732 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:52.732 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:52.732 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:52.990 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:52.990 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:52.990 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:53.247 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:53.247 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:53.247 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:54.182 No valid GPT data, bailing 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 192.168.100.8 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo rdma 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:54.182 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:30:54.441 00:30:54.441 Discovery Log Number of Records 2, Generation counter 2 00:30:54.441 =====Discovery Log Entry 0====== 00:30:54.441 trtype: rdma 00:30:54.441 adrfam: ipv4 00:30:54.441 subtype: current discovery subsystem 00:30:54.441 treq: not specified, sq flow control disable supported 00:30:54.441 portid: 1 00:30:54.441 trsvcid: 4420 00:30:54.441 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:54.441 traddr: 192.168.100.8 00:30:54.441 eflags: none 00:30:54.441 rdma_prtype: not specified 00:30:54.441 rdma_qptype: connected 00:30:54.441 rdma_cms: rdma-cm 00:30:54.441 rdma_pkey: 0x0000 00:30:54.441 =====Discovery Log Entry 1====== 00:30:54.441 trtype: rdma 00:30:54.441 adrfam: ipv4 00:30:54.441 subtype: nvme subsystem 00:30:54.441 treq: not specified, sq flow control disable supported 00:30:54.441 portid: 1 00:30:54.441 trsvcid: 4420 00:30:54.441 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:54.441 traddr: 192.168.100.8 00:30:54.441 eflags: none 00:30:54.441 rdma_prtype: not specified 00:30:54.441 rdma_qptype: connected 00:30:54.441 rdma_cms: rdma-cm 00:30:54.441 rdma_pkey: 0x0000 00:30:54.441 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.442 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.700 nvme0n1 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.700 22:00:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.959 nvme0n1 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
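
The nvmet_auth_set_key trace above records only the echo commands; xtrace does not show the configfs files they are redirected into. A minimal sketch of what the kernel-target provisioning amounts to for keyid 1, assuming the standard Linux nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), which are not named in the trace itself:

    # Sketch only: provision DH-HMAC-CHAP material for the allowed host on the
    # kernel target. Attribute names are assumed from the Linux nvmet configfs
    # layout; the key strings are the ones echoed in the trace above.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # negotiated digest
    echo 'ffdhe2048' > "$host_dir/dhchap_dhgroup"      # FFDHE group for DH-CHAP
    echo 'DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==:' > "$host_dir/dhchap_key"
    echo 'DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==:' > "$host_dir/dhchap_ctrl_key"

When a key has no controller counterpart (keyid 4 later in the trace, where the [[ -z '' ]] guard at host/auth.sh@51 fires), the dhchap_ctrl_key write is skipped.
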
00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.959 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.217 nvme0n1 00:30:55.217 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.217 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.217 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.217 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.217 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.217 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.217 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.218 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.476 nvme0n1 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:30:55.476 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.477 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.753 nvme0n1 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.753 22:00:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 
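
On the SPDK host side, each connect_authenticate iteration in this trace reduces to two RPCs: constrain the negotiable digests and DH groups, then attach with a named key pair. Roughly, via scripts/rpc.py (rpc_cmd in the trace is the test suite's wrapper around it; the key1/ckey1 names refer to keys registered with the target app earlier in the test, outside this excerpt):

    # Sketch of one iteration (sha256, ffdhe2048, keyid 1) as traced above.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

For keyid 4 the trace omits --dhchap-ctrlr-key, since that key has no controller counterpart; everything else is identical.
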
00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:55.753 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:56.011 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:56.011 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:56.011 22:00:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.011 nvme0n1 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.011 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:30:56.270 
22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.270 nvme0n1 00:30:56.270 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=1 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.529 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.788 nvme0n1 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:30:56.788 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:56.789 22:00:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.789 22:00:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.047 nvme0n1 00:30:57.047 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.047 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.048 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.307 nvme0n1 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.307 22:00:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:57.307 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.308 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.566 nvme0n1 00:30:57.566 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.566 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.566 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.566 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.566 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.566 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:57.824 
22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.824 22:00:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.084 nvme0n1 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.084 
22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.084 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.343 nvme0n1 00:30:58.343 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.343 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.343 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.343 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.343 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.343 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:58.602 
22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.602 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.861 nvme0n1 00:30:58.861 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.861 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.861 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.861 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:58.861 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.861 22:00:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:58.861 22:00:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.861 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.862 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.121 nvme0n1 00:30:59.121 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.121 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.121 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.121 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.121 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.121 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.380 
22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.380 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.640 nvme0n1 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.640 22:00:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.209 nvme0n1 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
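For orientation amid the xtrace output: this stretch of the log is one pass of a small nested loop in host/auth.sh (the @101-@104 tags), which walks every DH group and every key ID, programs the kernel target, then authenticates a connection from the SPDK host. A minimal sketch of that loop, with the digest fixed at sha256 as in this run; the array contents and helper bodies are assumptions reconstructed from the trace, not the script verbatim:

    # Driving loop reconstructed from the host/auth.sh@101-@104 tags in this log.
    # keys[]/ckeys[] hold the DHHC-1 secrets echoed throughout; the values are
    # test-generated, so they are left symbolic here.
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, ...
      for keyid in "${!keys[@]}"; do           # 0..4; keyid 4 has no controller key in this run
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target side (trace tags @42-@51)
        connect_authenticate sha256 "$dhgroup" "$keyid"  # host side (trace tags @55-@65)
      done
    done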
00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.209 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.778 nvme0n1 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
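The nvmet_auth_set_key step (@42-@51), which keeps echoing 'hmac(sha256)', a DH group name, and DHHC-1 blobs, is writing those values into the Linux kernel nvmet target's per-host configfs attributes. A hedged reconstruction follows; the configfs path and the variable names are assumptions, and only the echoed values and the @51 emptiness check come from the trace:

    # Assumed configfs location for the target-side host entry; the host NQN
    # matches the -q argument used by every attach call in this log.
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$nvmet_host/dhchap_hash"      # @48: negotiated digest
    echo "$dhgroup"     > "$nvmet_host/dhchap_dhgroup"   # @49: e.g. ffdhe6144
    echo "$key"         > "$nvmet_host/dhchap_key"       # @50: host key (DHHC-1:...)
    # @51: a controller key is written only for bidirectional auth (keyids 0-3 here).
    [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"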
00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:00.778 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:00.779 22:00:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.779 22:00:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.039 nvme0n1 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.039 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:01.298 22:00:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.298 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.558 nvme0n1 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.558 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:01.816 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:01.817 22:00:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.817 22:00:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.076 nvme0n1 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 
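On the host side, connect_authenticate (@55-@65) is built from SPDK RPCs that appear verbatim in this trace; the sketch below only strings them together for one (digest, dhgroup, keyid) combination, with the target address resolved the way get_main_ns_ip does here (NVMF_FIRST_TARGET_IP, 192.168.100.8, for the rdma transport):

    # Restrict the initiator to the digest and DH group under test (@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key is passed only when the keyid
    # has a controller key (@58/@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Authentication succeeded iff the controller shows up (@64); then tear down (@65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0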
00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.076 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.015 nvme0n1 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:03.015 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.016 22:00:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.585 nvme0n1 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:03.585 22:00:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.585 22:00:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 nvme0n1 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.155 
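On the host side, each connect_authenticate pass (auth.sh@55-61) mirrors those settings and attaches over RDMA with the matching key, as in the key2/ckey2 attach that just completed and produced nvme0n1. A sketch of the same two RPCs issued directly, assuming SPDK's scripts/rpc.py client (the trace's rpc_cmd is the harness wrapper around it); key2 and ckey2 name keys registered with the host earlier in the run, outside this excerpt:

    # Host side of one iteration: restrict the initiator to the single
    # digest/dhgroup pair under test, then attach with the keyid's keys.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe8192

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2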
22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.155 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.724 nvme0n1 00:31:04.724 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.724 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.724 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.724 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.724 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.724 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.984 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.984 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.984 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.984 22:00:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:04.984 22:00:37 
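After every attach, auth.sh@64-65 checks that authentication really produced a controller before tearing it down for the next combination. A sketch of that verify-and-detach step, using the same RPCs and jq filter as the trace:

    # Confirm the authenticated controller came up, then detach it so the
    # next digest/dhgroup/keyid combination starts from a clean state.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    scripts/rpc.py bdev_nvme_detach_controller nvme0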
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.984 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.553 nvme0n1 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:05.553 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:05.554 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.554 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.813 nvme0n1 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:05.813 22:00:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:05.813 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.814 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.074 nvme0n1 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:06.074 22:00:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.074 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.333 nvme0n1 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.333 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.594 22:00:38 
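The ckey assignment at auth.sh@58 is why keyid 4 attaches with --dhchap-key alone while keyids 0 through 3 also pass --dhchap-ctrlr-key: the ${...:+...} expansion drops the flag entirely when no controller key is set. A self-contained demo of the idiom, with a hypothetical secret standing in for a real DHHC-1 key:

    # ${arr[i]:+word} expands to word only when arr[i] is set and non-empty,
    # so the extra flag simply vanishes from the attach command for keyid 4.
    declare -a ckeys=("hypothetical-ctrlr-secret" "")
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller key flag>}"
    done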
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.594 nvme0n1 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.594 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.854 22:00:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:31:06.854 nvme0n1 00:31:06.854 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.854 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.854 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.854 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.854 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:07.113 
22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:07.113 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:07.114 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:07.114 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.114 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.373 nvme0n1 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.373 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.632 nvme0n1 00:31:07.632 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.632 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.632 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.632 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.632 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.632 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.633 22:00:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.893 nvme0n1 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.893 22:00:40 
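
Each pass through the trace above repeats one five-step cycle: program the kernel nvmet target with one of the pre-generated DH-HMAC-CHAP secrets, restrict the SPDK host to the digest/DH-group pair under test, attach a controller with the matching key, confirm the controller came up, and detach it. A minimal bash sketch of that cycle, assuming the keys/ckeys arrays and the nvmet_auth_set_key and get_main_ns_ip helpers defined earlier in host/auth.sh (run_auth_round is a hypothetical name; the real script inlines these steps in connect_authenticate):

    # one authentication round for a given digest, dhgroup and key slot (sketch)
    run_auth_round() {
        local digest=$1 dhgroup=$2 keyid=$3
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (configfs)
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # verify the controller authenticated and came up, then tear it down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
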
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.893 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.153 nvme0n1 00:31:08.153 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.153 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.153 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.153 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.153 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.153 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:08.412 22:00:40 
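
The DHHC-1:NN:<base64>: strings echoed into the target are NVMe-oF in-band authentication secrets: the two-digit field selects the hash used to transform the secret (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the payload is the base64-encoded secret with a CRC appended. Note that key slot 4 above carries an empty ckey, so that round exercises unidirectional authentication only; its attach call gets --dhchap-key key4 but no --dhchap-ctrlr-key. Secrets in this format are typically produced with nvme-cli; a hedged example (flag spelling per recent nvme-cli, verify against your version):

    # generate a SHA-384-transformed DH-HMAC-CHAP secret for the host NQN (assumed flags)
    nvme gen-dhchap-key --hmac=2 --nqn=nqn.2024-02.io.spdk:host0
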
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.412 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.671 nvme0n1 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:08.671 22:00:40 
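
On the target side, the echo 'hmac(sha384)', echo ffdhe4096 and echo DHHC-1:... steps traced at host/auth.sh@48-51 write the authentication parameters into the kernel nvmet configfs entry for the host NQN. A reconstruction of nvmet_auth_set_key from the traced lines, assuming $nvmet_host points at /sys/kernel/config/nvmet/hosts/<hostnqn> (the attribute names match the Linux nvmet driver, but treat the exact paths as an assumption):

    nvmet_auth_set_key() {    # sketch reconstructed from the trace, not verbatim
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"    # kernel crypto API spelling
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"
        echo "$key" > "$nvmet_host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }
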
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.671 22:00:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.930 nvme0n1 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.930 22:00:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:08.930 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:08.931 22:00:41 
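
On the host side, the names key0..key4 and ckey0..ckey3 passed to bdev_nvme_attach_controller are not the secrets themselves but keyring entries the test registered earlier, and bdev_nvme_set_options is reissued before every attach so that only one digest/DH-group pair is offered during negotiation. A hedged sketch of that preparation (keyring_file_add_key is SPDK's RPC for file-backed keys; the /tmp paths are placeholders, not the paths this run used):

    # register the secrets with SPDK's keyring, then constrain the negotiation
    rpc_cmd keyring_file_add_key key1 /tmp/key1      # placeholder path
    rpc_cmd keyring_file_add_key ckey1 /tmp/ckey1    # placeholder path
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
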
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.931 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.190 nvme0n1 00:31:09.190 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.190 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.190 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.190 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:09.449 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.450 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.708 nvme0n1 00:31:09.708 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.708 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.708 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.709 22:00:41 
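
The recurring local ip / ip_candidates=() / local -A ip_candidates fragments, one run of which starts just above, all belong to the get_main_ns_ip helper in nvmf/common.sh: it maps the transport in use to the name of the environment variable holding the right target address and echoes that variable's value. Reconstructed from the traced lines (the trace only shows the value rdma, so TEST_TRANSPORT is an assumed variable name):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as [[ -z rdma ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_FIRST_TARGET_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                             # [[ -z 192.168.100.8 ]]
        echo "${!ip}"                                           # indirect expansion
    }
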
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.709 22:00:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.967 nvme0n1 00:31:09.967 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.967 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.967 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.967 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.967 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.967 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.226 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.485 nvme0n1 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.485 22:00:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:10.485 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.486 22:00:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.053 nvme0n1 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
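
Two bash idioms in this trace are easy to misread. First, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) at host/auth.sh@58 builds an array that is either empty or a complete flag/argument pair, which lets the attach call splice in the controller key only when one exists (hence key slot 4 attaching without --dhchap-ctrlr-key). Second, [[ nvme0 == \n\v\m\e\0 ]] is just xtrace's backslash-escaped rendering of a literal, non-glob comparison against the string nvme0. A self-contained illustration of the first idiom:

    #!/usr/bin/env bash
    ckeys=([0]="secret0" [4]="")                  # slot 4 has no controller key
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # prints:
    #   keyid=0 extra args: --dhchap-ctrlr-key ckey0
    #   keyid=4 extra args: <none>
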
00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.053 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.619 nvme0n1 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:11.619 22:00:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.619 22:00:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.878 nvme0n1 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.878 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
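Each host-side connect_authenticate pass in this stretch reduces to four RPCs. The following is a condensed, hypothetical replay of the keyid=2 pass using scripts/rpc.py directly, assuming key2/ckey2 were registered as keyring keys earlier in the run (auth.sh sets them up before this loop; the names here simply mirror the log).

    rpc=scripts/rpc.py
    # restrict the initiator to the digest/DH-group pair under test
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # attach with DH-HMAC-CHAP; --dhchap-ctrlr-key makes the authentication bidirectional
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # the pass succeeds when exactly one controller named nvme0 comes up
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0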
00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.137 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.396 nvme0n1 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.396 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:12.654 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.655 22:00:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.913 nvme0n1 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.913 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.170 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.735 nvme0n1 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:13.735 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.736 22:00:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:14.303 nvme0n1 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:14.303 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.562 22:00:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.130 nvme0n1 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.130 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:15.131 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:15.131 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:15.131 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:15.131 
22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:15.131 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:15.131 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.131 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.698 nvme0n1 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:15.698 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:15.699 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:15.957 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:15.957 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.957 22:00:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.524 nvme0n1 00:31:16.524 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.524 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.524 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.524 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.524 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:16.525 22:00:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.525 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 nvme0n1 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:16.783 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:16.784 22:00:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.784 22:00:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.043 nvme0n1 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.043 22:00:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:17.043 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
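The get_main_ns_ip block that precedes every attach in this log is the same few lines of nvmf/common.sh each time: pick an environment-variable name by transport, then dereference it. A sketch of that selection follows, with TEST_TRANSPORT standing in for the transport variable the suite sets (rdma in this run); the function body mirrors the candidate table visible in the trace.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP  # phy RDMA runs use the target port IP, 192.168.100.8 here
            ["tcp"]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}  # e.g. TEST_TRANSPORT=rdma
        [[ -n ${!ip} ]] && echo "${!ip}"      # indirect expansion of the chosen variable
    }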
00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.044 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 nvme0n1 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:17.303 22:00:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.565 nvme0n1 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:17.565 22:00:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:17.565 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:17.845 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:17.845 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:17.845 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:17.845 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:17.845 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.845 22:00:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.845 nvme0n1 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
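With the detach of nvme0 above, the ffdhe2048 pass is complete; host/auth.sh@101 now advances the outer loop to ffdhe3072 and replays the same five key IDs. The shape of the driving loop, reconstructed from the host/auth.sh line references visible in the trace (array contents beyond what the trace shows are assumptions):

  for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ...
      for keyid in "${!keys[@]}"; do         # key IDs 0 through 4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # program the target side
          connect_authenticate sha512 "$dhgroup" "$keyid"  # attach, verify nvme0, detach
      done
  done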
00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.845 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:18.122 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.123 nvme0n1 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.123 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.391 nvme0n1 00:31:18.391 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.651 22:00:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.651 22:00:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.651 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.910 nvme0n1 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.910 22:00:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 
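Every secret in this trace uses the NVMe DH-HMAC-CHAP representation, DHHC-1:<t>:<base64>:, where the <t> field records how the secret was transformed (00 cleartext, 01 SHA-256, 02 SHA-384, 03 SHA-512); it is independent of the sha512 digest being negotiated here. Note that key ID 4 carries no controller key (its ckey is empty), so that iteration exercises unidirectional authentication only. One way to mint a secret of this shape is nvme-cli's generator; the invocation below is an illustration and does not come from this log:

  # generate a SHA-384-transformed (DHHC-1:02:...) secret tied to the subsystem NQN
  nvme gen-dhchap-key --hmac=2 --nqn nqn.2024-02.io.spdk:cnode0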
00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:18.910 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:18.911 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.911 22:00:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.169 nvme0n1 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.169 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.428 nvme0n1 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.428 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.686 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
192.168.100.8 ]] 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.687 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.945 nvme0n1 00:31:19.945 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.945 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.945 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.945 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.946 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.946 22:00:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.946 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.204 nvme0n1 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.205 22:00:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:20.205 22:00:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.205 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.772 nvme0n1 00:31:20.772 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.772 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.772 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.772 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.772 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.772 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.772 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.773 22:00:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.032 nvme0n1 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.032 
22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:21.032 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.033 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.292 nvme0n1 00:31:21.292 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.292 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.292 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.292 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.292 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.292 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.292 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.293 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.293 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.293 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:21.552 22:00:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.552 22:00:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 nvme0n1 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.812 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.070 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.070 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.070 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:22.070 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.070 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:22.070 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:22.070 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:22.071 22:00:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.071 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.330 nvme0n1 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
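The records above trace one complete connect_authenticate pass (auth.sh@104) for sha512/ffdhe6144 with keyid 1. A minimal sketch condensing that host-side cycle, assuming rpc_cmd wraps SPDK's scripts/rpc.py against the running target and reusing the address and NQNs exactly as logged:

    # Host-side cycle as traced at auth.sh@60-65; rpc_cmd, jq, and the
    # target-side key provisioning are assumed to be set up as in this run.
    digest=sha512 dhgroup=ffdhe6144 keyid=1

    # Allow only the digest/dhgroup pair under test for DH-HMAC-CHAP.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key and, when the test defines one, the
    # bidirectional controller key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The controller only shows up if authentication succeeded; detach so
    # the next digest/dhgroup/keyid combination starts from a clean state.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The bare "nvme0n1" lines interleaved in the log are the namespace appearing after each successful attach.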
00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:22.330 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.331 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 
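The nvmet_auth_set_key trace just above (auth.sh@42-51, keyid 2) shows the target-side half: the echoes at @48-51 plausibly land in the Linux kernel nvmet configfs host entry. The configfs paths below are an assumption based on the kernel's nvmet ABI, not something this log shows directly; the key strings are copied verbatim from the trace:

    # Assumed destination of the auth.sh@48-51 echoes: the soft target's
    # per-host DH-HMAC-CHAP attributes in configfs (paths are an assumption).
    hostnqn=nqn.2024-02.io.spdk:host0
    host_path=/sys/kernel/config/nvmet/hosts/${hostnqn}

    echo 'hmac(sha512)' > "${host_path}/dhchap_hash"     # digest under test
    echo ffdhe6144      > "${host_path}/dhchap_dhgroup"  # DH group under test
    echo "DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb:" \
        > "${host_path}/dhchap_key"                      # host key for keyid=2
    # The controller key is only written when a ckey is defined for this
    # keyid; the [[ -z ... ]] guard at auth.sh@51 traces that check.
    echo "DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4:" \
        > "${host_path}/dhchap_ctrl_key"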
00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.590 22:00:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.850 nvme0n1 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.850 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.417 nvme0n1 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 
-- # local -A ip_candidates 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.417 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.986 nvme0n1 00:31:23.986 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.986 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.986 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.986 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.986 22:00:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 
00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGNkMDk2ODhiZDZmOTg1ZTEyYzIwZDRjOWZiZjRlOTb0Vfg2: 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: ]] 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjEzMzg5ODEyZjU5ZTdkZjg5NGI4M2I3ODY0MTUyZGRmYjNkYjc2NTM3MDZmZmUyNGMwYTA0M2QzNTVhYzI4MFdIHW4=: 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.986 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.555 nvme0n1 
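The nvmf/common.sh@765-779 records that repeat before every attach are the get_main_ns_ip helper resolving which address to dial. A plausible reconstruction from the trace, where the variable names follow the xtrace and the surrounding definitions (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP) are assumptions about nvmf/common.sh:

    # Reconstructed from the @765-779 trace: map the transport to the *name*
    # of the env var holding the target IP, then dereference it indirectly.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1    # traced as [[ -z 192.168.100.8 ]] here
        echo "${!ip}"                  # 192.168.100.8 on this rig
    }

On this run the transport is rdma, so every iteration resolves to NVMF_FIRST_TARGET_IP = 192.168.100.8.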
00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.555 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.556 22:00:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.556 22:00:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.124 nvme0n1 00:31:25.124 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.124 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.124 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:25.384 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 
00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.385 22:00:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.954 nvme0n1 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:25.954 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjA5NmI1NTNhYzFiNmU0Yjk5ZmU3ZDlhZTljNzM0YWY2MTI5NDFkZjEyN2RiMDkwIAKieg==: 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: ]] 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDgxNzAxNTJmMDAxMTU5MDQ2NzRmMWYzMWQ4N2MxOWVD86p9: 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:25.955 22:00:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.955 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.523 nvme0n1 00:31:26.523 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.523 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.523 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.523 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.523 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjI2ZjM1ZWE1MGM2ZmUxOGFlNjA1OTVmYjBjZTYyOWJiN2FmNTdiODliODA5Y2M0ZTU5MDdjNjc0YTE3OGFkM0bRswM=: 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.783 22:00:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.353 nvme0n1 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:27.353 
22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.353 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.613 request: 00:31:27.613 { 00:31:27.613 "name": "nvme0", 00:31:27.613 "trtype": "rdma", 00:31:27.613 "traddr": "192.168.100.8", 00:31:27.613 "adrfam": "ipv4", 00:31:27.613 "trsvcid": "4420", 00:31:27.613 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:31:27.613 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:27.613 "prchk_reftag": false, 00:31:27.613 "prchk_guard": false, 00:31:27.613 "hdgst": false, 00:31:27.613 "ddgst": false, 00:31:27.613 "allow_unrecognized_csi": false, 00:31:27.613 "method": "bdev_nvme_attach_controller", 00:31:27.613 "req_id": 1 00:31:27.613 } 00:31:27.613 Got JSON-RPC error response 00:31:27.613 response: 00:31:27.613 { 00:31:27.613 "code": -5, 00:31:27.613 "message": "Input/output error" 00:31:27.613 } 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:27.613 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.614 request: 00:31:27.614 { 00:31:27.614 "name": "nvme0", 00:31:27.614 "trtype": "rdma", 00:31:27.614 "traddr": "192.168.100.8", 00:31:27.614 "adrfam": "ipv4", 00:31:27.614 "trsvcid": "4420", 00:31:27.614 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:27.614 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:27.614 "prchk_reftag": false, 00:31:27.614 "prchk_guard": false, 00:31:27.614 "hdgst": false, 00:31:27.614 "ddgst": false, 00:31:27.614 "dhchap_key": "key2", 00:31:27.614 "allow_unrecognized_csi": false, 00:31:27.614 "method": "bdev_nvme_attach_controller", 00:31:27.614 "req_id": 1 00:31:27.614 } 00:31:27.614 Got JSON-RPC error response 00:31:27.614 response: 00:31:27.614 { 00:31:27.614 "code": -5, 00:31:27.614 "message": "Input/output error" 00:31:27.614 } 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma 
]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.614 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.874 request: 00:31:27.874 { 00:31:27.874 "name": "nvme0", 00:31:27.874 "trtype": "rdma", 00:31:27.874 "traddr": "192.168.100.8", 00:31:27.874 "adrfam": "ipv4", 00:31:27.874 "trsvcid": "4420", 00:31:27.874 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:27.874 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:27.874 "prchk_reftag": false, 00:31:27.874 "prchk_guard": false, 00:31:27.874 "hdgst": false, 00:31:27.874 "ddgst": false, 00:31:27.874 "dhchap_key": "key1", 00:31:27.874 "dhchap_ctrlr_key": "ckey2", 00:31:27.874 "allow_unrecognized_csi": false, 00:31:27.874 "method": "bdev_nvme_attach_controller", 00:31:27.874 "req_id": 1 00:31:27.874 } 00:31:27.874 Got JSON-RPC error response 00:31:27.874 response: 00:31:27.874 { 00:31:27.874 "code": -5, 00:31:27.874 "message": "Input/output error" 00:31:27.874 } 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:31:27.874 22:00:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.874 22:00:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.134 nvme0n1 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.134 
22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.134 request: 00:31:28.134 { 00:31:28.134 "name": "nvme0", 00:31:28.134 "dhchap_key": "key1", 00:31:28.134 "dhchap_ctrlr_key": "ckey2", 00:31:28.134 "method": "bdev_nvme_set_keys", 00:31:28.134 "req_id": 1 00:31:28.134 } 00:31:28.134 Got JSON-RPC error response 00:31:28.134 response: 00:31:28.134 { 00:31:28.134 "code": -13, 00:31:28.134 "message": "Permission denied" 00:31:28.134 } 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:28.134 22:01:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:29.513 22:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.513 22:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:29.513 22:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.513 22:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.513 22:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.513 22:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:29.513 22:01:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjEyNWFjMTA4NzhkOTgzNzZhZjkxNzgwYTdiNGEzNDg4MGUwOGUwOGIyMzA1OGQzh9tpQA==: 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: ]] 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmE5NDU3NTVmMzAzYWVkZjIxZjg5OWM0OTc2YzQ4ZjM3MGQwZjUwMWQyNjFhZWMwizbu3g==: 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z rdma ]] 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_FIRST_TARGET_IP 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 192.168.100.8 ]] 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 192.168.100.8 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.451 nvme0n1 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDFjM2QwOTRhZTA2MGI3NTIzYTIxYWQ0NDE4MDUwMDBX35Cb: 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: ]] 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGRjYWRhM2NlODRjZGU0ZDc1NGNjNzNjNDBlNzNjZjOTg3o4: 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.451 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.710 request: 00:31:30.710 { 00:31:30.710 "name": "nvme0", 00:31:30.710 "dhchap_key": "key2", 00:31:30.710 "dhchap_ctrlr_key": "ckey1", 00:31:30.710 "method": "bdev_nvme_set_keys", 00:31:30.710 "req_id": 1 00:31:30.710 } 00:31:30.710 Got JSON-RPC error response 00:31:30.710 response: 00:31:30.710 { 00:31:30.710 "code": -13, 00:31:30.710 "message": "Permission denied" 00:31:30.710 } 00:31:30.710 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:30.710 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:30.710 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:30.710 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:30.710 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:30.710 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.710 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.710 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:30.711 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.711 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.711 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:30.711 22:01:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:31.672 22:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.672 22:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:31.672 22:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.672 22:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.672 22:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.672 22:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:31.672 22:01:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:32.608 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:32.608 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.608 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.608 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:31:32.868 
22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:32.868 rmmod nvme_rdma 00:31:32.868 rmmod nvme_fabrics 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 3207730 ']' 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 3207730 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3207730 ']' 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3207730 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3207730 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3207730' 00:31:32.868 killing process with pid 3207730 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3207730 00:31:32.868 22:01:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3207730 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:33.126 
22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_rdma nvmet 00:31:33.126 22:01:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:35.661 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:35.661 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:35.661 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:35.661 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:35.921 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:37.828 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:38.087 22:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kHp /tmp/spdk.key-null.FsG /tmp/spdk.key-sha256.wJw /tmp/spdk.key-sha384.KBq /tmp/spdk.key-sha512.ZDw /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:31:38.087 22:01:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:41.378 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:80:04.1 (8086 2021): Already 
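The cleanup traced above (host/auth.sh@24-28 plus clean_kernel_target in nvmf/common.sh) unwinds everything the auth test created: it stops the SPDK side, unloads nvme-rdma/nvme-fabrics on the host, then dismantles the kernel nvmet target through configfs in reverse order of creation. A sketch of that configfs teardown, assuming the single-port, single-namespace layout this test builds; the `echo 0` at common.sh@710 is taken here to disable the namespace via its enable attribute, which is an inference from the surrounding steps:

    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
    # Revoke the host's access before removing its entry.
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
    # A namespace must be disabled before it can be removed.
    echo 0 > "$subsys/namespaces/1/enable"
    # Unlink the subsystem from the port, then remove namespace, port, subsystem.
    rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$subsys/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_rdma nvmet   # drop the kernel target modules last

With the target gone, setup.sh rebinds the ioatdma/nvme PCI functions to vfio-pci (the device list above) so the next suite starts from a known driver state.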
using the vfio-pci driver 00:31:41.378 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:41.378 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:41.378 00:31:41.378 real 1m0.508s 00:31:41.378 user 0m54.376s 00:31:41.378 sys 0m15.288s 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.378 ************************************ 00:31:41.378 END TEST nvmf_auth_host 00:31:41.378 ************************************ 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.378 ************************************ 00:31:41.378 START TEST nvmf_bdevperf 00:31:41.378 ************************************ 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:31:41.378 * Looking for test storage... 
00:31:41.378 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.378 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:41.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.379 --rc genhtml_branch_coverage=1 00:31:41.379 --rc genhtml_function_coverage=1 00:31:41.379 --rc genhtml_legend=1 00:31:41.379 --rc geninfo_all_blocks=1 00:31:41.379 --rc geninfo_unexecuted_blocks=1 00:31:41.379 00:31:41.379 ' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:41.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.379 --rc genhtml_branch_coverage=1 00:31:41.379 --rc genhtml_function_coverage=1 00:31:41.379 --rc genhtml_legend=1 00:31:41.379 --rc geninfo_all_blocks=1 00:31:41.379 --rc geninfo_unexecuted_blocks=1 00:31:41.379 00:31:41.379 ' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:41.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.379 --rc genhtml_branch_coverage=1 00:31:41.379 --rc genhtml_function_coverage=1 00:31:41.379 --rc genhtml_legend=1 00:31:41.379 --rc geninfo_all_blocks=1 00:31:41.379 --rc geninfo_unexecuted_blocks=1 00:31:41.379 00:31:41.379 ' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:41.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.379 --rc genhtml_branch_coverage=1 00:31:41.379 --rc genhtml_function_coverage=1 00:31:41.379 --rc genhtml_legend=1 00:31:41.379 --rc geninfo_all_blocks=1 00:31:41.379 --rc geninfo_unexecuted_blocks=1 00:31:41.379 00:31:41.379 ' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.379 22:01:13 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:41.379 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:41.379 22:01:13 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.379 22:01:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:47.947 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.947 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:47.947 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:47.947 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:47.947 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:47.947 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:47.947 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:47.947 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.948 22:01:20 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:47.948 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:47.948 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 
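The loop traced above is the harness's NIC discovery: for each candidate PCI function it matches the device ID against the Intel/Mellanox tables built earlier, then looks for network interfaces under /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of the same sysfs matching, using only the vendor ID (0x15b3) and paths visible in this trace; variable names are illustrative, not the harness's own:

#!/usr/bin/env bash
# Sketch: enumerate Mellanox PCI functions and the netdevs bound to them,
# mirroring the gather_supported_nvmf_pci_devs loop traced above.
mellanox=0x15b3
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == "$mellanox" ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net_dev in "$pci"/net/*; do            # e.g. mlx_0_0, mlx_0_1
        [[ -e $net_dev ]] && echo "  net device: ${net_dev##*/}"
    done
done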
00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:47.948 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:47.948 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # rdma_device_init 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@526 -- # allocate_nic_ips 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:47.948 
22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:47.948 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:47.948 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:47.948 altname enp217s0f0np0 00:31:47.948 altname ens818f0np0 00:31:47.948 inet 192.168.100.8/24 scope global mlx_0_0 00:31:47.948 valid_lft forever preferred_lft forever 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # 
ip -o -4 addr show mlx_0_1 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:47.948 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:47.949 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:47.949 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:47.949 altname enp217s0f1np1 00:31:47.949 altname ens818f1np1 00:31:47.949 inet 192.168.100.9/24 scope global mlx_0_1 00:31:47.949 valid_lft forever preferred_lft forever 00:31:47.949 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:31:47.949 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:47.949 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:47.949 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:31:48.208 192.168.100.9' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:31:48.208 192.168.100.9' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # head -n 1 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:31:48.208 192.168.100.9' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # tail -n +2 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # head -n 1 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3222713 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3222713 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3222713 
']' 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:48.208 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:48.208 [2024-11-29 22:01:20.342171] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:48.208 [2024-11-29 22:01:20.342224] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.208 [2024-11-29 22:01:20.413966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:48.208 [2024-11-29 22:01:20.454469] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.208 [2024-11-29 22:01:20.454509] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.208 [2024-11-29 22:01:20.454520] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.208 [2024-11-29 22:01:20.454530] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.208 [2024-11-29 22:01:20.454537] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.208 [2024-11-29 22:01:20.454582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.208 [2024-11-29 22:01:20.454689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.208 [2024-11-29 22:01:20.454691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.466 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:48.466 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:31:48.466 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:48.466 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:48.467 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:48.467 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.467 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:48.467 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.467 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:48.467 [2024-11-29 22:01:20.635420] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e93710/0x1e97bc0) succeed. 
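With the IB devices created, host/bdevperf.sh builds the target through rpc_cmd: an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above), a subsystem, a namespace, and a listener on 192.168.100.8:4420, as the records below show. A hedged standalone equivalent using scripts/rpc.py (rpc_cmd in the harness issues the same JSON-RPC methods; this assumes an SPDK checkout and an already-running nvmf_tgt):

#!/usr/bin/env bash
# Sketch: stand up an NVMe-oF RDMA target equivalent to the traced rpc_cmd sequence.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420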
00:31:48.467 [2024-11-29 22:01:20.646063] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e94c60/0x1ed9260) succeed. 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:48.725 Malloc0 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:48.725 [2024-11-29 22:01:20.778542] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:48.725 { 00:31:48.725 "params": { 00:31:48.725 "name": "Nvme$subsystem", 00:31:48.725 "trtype": "$TEST_TRANSPORT", 00:31:48.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:48.725 "adrfam": "ipv4", 00:31:48.725 "trsvcid": "$NVMF_PORT", 00:31:48.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:48.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:48.725 "hdgst": ${hdgst:-false}, 00:31:48.725 "ddgst": ${ddgst:-false} 00:31:48.725 }, 00:31:48.725 "method": "bdev_nvme_attach_controller" 00:31:48.725 } 00:31:48.725 
EOF 00:31:48.725 )") 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:31:48.725 22:01:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:31:48.725 "params": { 00:31:48.725 "name": "Nvme1", 00:31:48.725 "trtype": "rdma", 00:31:48.725 "traddr": "192.168.100.8", 00:31:48.725 "adrfam": "ipv4", 00:31:48.725 "trsvcid": "4420", 00:31:48.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:48.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:48.725 "hdgst": false, 00:31:48.725 "ddgst": false 00:31:48.725 }, 00:31:48.725 "method": "bdev_nvme_attach_controller" 00:31:48.725 }' 00:31:48.725 [2024-11-29 22:01:20.831603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:48.725 [2024-11-29 22:01:20.831656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3222749 ] 00:31:48.725 [2024-11-29 22:01:20.901914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.725 [2024-11-29 22:01:20.940977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.992 Running I/O for 1 seconds... 00:31:49.925 18304.00 IOPS, 71.50 MiB/s 00:31:49.925 Latency(us) 00:31:49.925 [2024-11-29T21:01:22.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.925 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:49.925 Verification LBA range: start 0x0 length 0x4000 00:31:49.925 Nvme1n1 : 1.01 18319.55 71.56 0.00 0.00 6951.36 2621.44 10905.19 00:31:49.925 [2024-11-29T21:01:22.173Z] =================================================================================================================== 00:31:49.925 [2024-11-29T21:01:22.173Z] Total : 18319.55 71.56 0.00 0.00 6951.36 2621.44 10905.19 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3223017 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:50.183 { 00:31:50.183 "params": { 00:31:50.183 "name": "Nvme$subsystem", 00:31:50.183 "trtype": "$TEST_TRANSPORT", 00:31:50.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.183 "adrfam": "ipv4", 00:31:50.183 "trsvcid": "$NVMF_PORT", 00:31:50.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.183 "hdgst": ${hdgst:-false}, 00:31:50.183 "ddgst": ${ddgst:-false} 00:31:50.183 }, 00:31:50.183 "method": 
"bdev_nvme_attach_controller" 00:31:50.183 } 00:31:50.183 EOF 00:31:50.183 )") 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:31:50.183 22:01:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:31:50.183 "params": { 00:31:50.183 "name": "Nvme1", 00:31:50.183 "trtype": "rdma", 00:31:50.183 "traddr": "192.168.100.8", 00:31:50.183 "adrfam": "ipv4", 00:31:50.183 "trsvcid": "4420", 00:31:50.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:50.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:50.183 "hdgst": false, 00:31:50.183 "ddgst": false 00:31:50.183 }, 00:31:50.183 "method": "bdev_nvme_attach_controller" 00:31:50.183 }' 00:31:50.183 [2024-11-29 22:01:22.365752] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:50.183 [2024-11-29 22:01:22.365808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223017 ] 00:31:50.441 [2024-11-29 22:01:22.436129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.441 [2024-11-29 22:01:22.471819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.441 Running I/O for 15 seconds... 00:31:52.747 18304.00 IOPS, 71.50 MiB/s [2024-11-29T21:01:25.560Z] 18416.00 IOPS, 71.94 MiB/s [2024-11-29T21:01:25.560Z] 22:01:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3222713 00:31:53.312 22:01:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:31:54.206 16426.67 IOPS, 64.17 MiB/s [2024-11-29T21:01:26.454Z] [2024-11-29 22:01:26.361208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.206 [2024-11-29 22:01:26.361245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.206 [2024-11-29 22:01:26.361262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.206 [2024-11-29 22:01:26.361288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.206 [2024-11-29 22:01:26.361299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.206 [2024-11-29 22:01:26.361308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.206 [2024-11-29 22:01:26.361318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.206 [2024-11-29 22:01:26.361332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.206 [2024-11-29 22:01:26.361343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:54.206 [2024-11-29 
22:01:26.361351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.206
[... several hundred near-identical nvme_qpair records elided: after the target is killed, every in-flight bdevperf command on qid:1 (WRITE lba 1776-2040 with SGL DATA BLOCK OFFSET 0x0 len:0x1000, then READ lba 1024-1456 with SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 down to 0x200007592000, len:0x1000 key:0x183000) is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 ...]
00:31:54.208 [2024-11-29 22:01:26.362957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183000 00:31:54.208 [2024-11-29 22:01:26.362965] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.208 [2024-11-29 22:01:26.362975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183000 00:31:54.208 [2024-11-29 22:01:26.362984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.208 [2024-11-29 22:01:26.362994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183000 00:31:54.208 [2024-11-29 22:01:26.363003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 
cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 
00:31:54.209 [2024-11-29 22:01:26.363482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.363604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.363614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.373305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.373336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.373347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.373358] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.373368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.373378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183000 00:31:54.209 [2024-11-29 22:01:26.373387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:58813 cdw0:dd025000 sqhd:b152 p:1 m:0 dnr:0 00:31:54.209 [2024-11-29 22:01:26.375290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:54.209 [2024-11-29 22:01:26.375303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:54.210 [2024-11-29 22:01:26.375312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1768 len:8 PRP1 0x0 PRP2 0x0 00:31:54.210 [2024-11-29 22:01:26.375321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.210 [2024-11-29 22:01:26.375363] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019ae4900 was disconnected and freed. reset controller. 00:31:54.210 [2024-11-29 22:01:26.375395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.210 [2024-11-29 22:01:26.375406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58813 cdw0:93e550 sqhd:d912 p:1 m:0 dnr:0 00:31:54.210 [2024-11-29 22:01:26.375418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.210 [2024-11-29 22:01:26.375427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58813 cdw0:93e550 sqhd:d912 p:1 m:0 dnr:0 00:31:54.210 [2024-11-29 22:01:26.375436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.210 [2024-11-29 22:01:26.375445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58813 cdw0:93e550 sqhd:d912 p:1 m:0 dnr:0 00:31:54.210 [2024-11-29 22:01:26.375455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.210 [2024-11-29 22:01:26.375463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:58813 cdw0:93e550 sqhd:d912 p:1 m:0 dnr:0 00:31:54.210 [2024-11-29 22:01:26.391744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:54.210 [2024-11-29 22:01:26.391797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:54.210 [2024-11-29 22:01:26.391830] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
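The pattern above — every queued READ completed manually as ABORTED - SQ DELETION, the qpair disconnected and freed, pending ASYNC EVENT REQUESTs cancelled, then a CQ transport error and a blocked failover — is the initiator-side view of the target's send queue being deleted under load. A minimal sketch of how the same reset cycle could be watched live from the shell (the bdevperf RPC socket path and the polling interval are assumptions, not values taken from this run):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # bdev_nvme_get_controllers reports each attached controller and its state,
  # so the reset/reconnect attempts seen above show up as state changes here.
  while sleep 1; do
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  done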
00:31:54.210 [2024-11-29 22:01:26.394692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:54.210 [2024-11-29 22:01:26.397440] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:31:54.210 [2024-11-29 22:01:26.397462] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:31:54.210 [2024-11-29 22:01:26.397470] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:31:55.443 12320.00 IOPS, 48.12 MiB/s [2024-11-29T21:01:27.691Z] [2024-11-29 22:01:27.401434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:55.443 [2024-11-29 22:01:27.401494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:55.443 [2024-11-29 22:01:27.402097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:55.443 [2024-11-29 22:01:27.402133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:55.443 [2024-11-29 22:01:27.402174] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:31:55.443 [2024-11-29 22:01:27.404170] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:55.443 [2024-11-29 22:01:27.404780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.443 [2024-11-29 22:01:27.416994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:55.443 [2024-11-29 22:01:27.420118] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:31:55.443 [2024-11-29 22:01:27.420144] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:31:55.443 [2024-11-29 22:01:27.420155] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:31:56.379 9856.00 IOPS, 38.50 MiB/s [2024-11-29T21:01:28.627Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3222713 Killed "${NVMF_APP[@]}" "$@" 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3224041 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3224041 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3224041 ']' 00:31:56.379 22:01:28 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 [2024-11-29 22:01:28.374188] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:56.379 [2024-11-29 22:01:28.374237] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.379 [2024-11-29 22:01:28.424438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:56.379 [2024-11-29 22:01:28.424469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:56.379 [2024-11-29 22:01:28.424643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:56.379 [2024-11-29 22:01:28.424654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:56.379 [2024-11-29 22:01:28.424670] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:31:56.379 [2024-11-29 22:01:28.426409] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:56.379 [2024-11-29 22:01:28.427350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:56.379 [2024-11-29 22:01:28.439205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:56.379 [2024-11-29 22:01:28.441918] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:31:56.379 [2024-11-29 22:01:28.441939] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:31:56.379 [2024-11-29 22:01:28.441947] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019aed000 00:31:56.379 [2024-11-29 22:01:28.446763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:56.379 [2024-11-29 22:01:28.486375] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.379 [2024-11-29 22:01:28.486419] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.379 [2024-11-29 22:01:28.486428] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.379 [2024-11-29 22:01:28.486437] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.379 [2024-11-29 22:01:28.486444] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:56.379 [2024-11-29 22:01:28.486487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.379 [2024-11-29 22:01:28.486570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:56.379 [2024-11-29 22:01:28.486572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.379 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.638 8213.33 IOPS, 32.08 MiB/s [2024-11-29T21:01:28.886Z] [2024-11-29 22:01:28.673634] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x957710/0x95bbc0) succeed. 00:31:56.638 [2024-11-29 22:01:28.683825] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x958c60/0x99d260) succeed. 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.638 Malloc0 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.638 22:01:28 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.638 [2024-11-29 22:01:28.816807] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.638 22:01:28 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3223017 00:31:57.204 [2024-11-29 22:01:29.445941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:57.204 [2024-11-29 22:01:29.445970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:57.204 [2024-11-29 22:01:29.446143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:57.204 [2024-11-29 22:01:29.446154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:57.204 [2024-11-29 22:01:29.446165] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:31:57.204 [2024-11-29 22:01:29.448853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:57.463 [2024-11-29 22:01:29.457066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:57.463 [2024-11-29 22:01:29.501595] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:58.836 7469.14 IOPS, 29.18 MiB/s [2024-11-29T21:01:32.018Z] 8826.25 IOPS, 34.48 MiB/s [2024-11-29T21:01:32.953Z] 9887.33 IOPS, 38.62 MiB/s [2024-11-29T21:01:33.887Z] 10737.20 IOPS, 41.94 MiB/s [2024-11-29T21:01:34.820Z] 11432.27 IOPS, 44.66 MiB/s [2024-11-29T21:01:35.750Z] 12010.17 IOPS, 46.91 MiB/s [2024-11-29T21:01:37.120Z] 12501.23 IOPS, 48.83 MiB/s [2024-11-29T21:01:38.052Z] 12921.21 IOPS, 50.47 MiB/s [2024-11-29T21:01:38.052Z] 13285.40 IOPS, 51.90 MiB/s 00:32:05.804 Latency(us) 00:32:05.804 [2024-11-29T21:01:38.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.804 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:05.804 Verification LBA range: start 0x0 length 0x4000 00:32:05.804 Nvme1n1 : 15.01 13285.05 51.89 10693.32 0.00 5318.25 327.68 1067030.94 00:32:05.804 [2024-11-29T21:01:38.052Z] =================================================================================================================== 00:32:05.804 [2024-11-29T21:01:38.052Z] Total : 13285.05 51.89 10693.32 0.00 5318.25 327.68 1067030.94 00:32:05.804 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:05.804 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:05.804 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.804 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:05.804 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.804 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:05.804 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:05.805 
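Condensed, the rpc_cmd calls replayed above amount to a five-step RDMA target bring-up. A stand-alone sketch with the same arguments (rpc.py is assumed to talk to the default /var/tmp/spdk.sock the target was started on, as the waitforlisten message earlier shows):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192  # RDMA transport, as configured above
  $rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420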
22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:05.805 rmmod nvme_rdma 00:32:05.805 rmmod nvme_fabrics 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 3224041 ']' 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 3224041 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3224041 ']' 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3224041 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:05.805 22:01:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3224041 00:32:05.805 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:05.805 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:05.805 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3224041' 00:32:05.805 killing process with pid 3224041 00:32:05.805 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3224041 00:32:05.805 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3224041 00:32:06.063 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:06.063 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:32:06.063 00:32:06.063 real 0m24.891s 00:32:06.063 user 1m2.441s 00:32:06.063 sys 0m6.315s 00:32:06.063 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:06.063 22:01:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:06.063 ************************************ 00:32:06.063 END TEST nvmf_bdevperf 00:32:06.063 ************************************ 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.321 ************************************ 00:32:06.321 START TEST 
nvmf_target_disconnect 00:32:06.321 ************************************ 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:32:06.321 * Looking for test storage... 00:32:06.321 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:06.321 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:06.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.581 --rc genhtml_branch_coverage=1 00:32:06.581 --rc genhtml_function_coverage=1 00:32:06.581 --rc genhtml_legend=1 00:32:06.581 --rc geninfo_all_blocks=1 00:32:06.581 --rc geninfo_unexecuted_blocks=1 00:32:06.581 00:32:06.581 ' 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:06.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.581 --rc genhtml_branch_coverage=1 00:32:06.581 --rc genhtml_function_coverage=1 00:32:06.581 --rc genhtml_legend=1 00:32:06.581 --rc geninfo_all_blocks=1 00:32:06.581 --rc geninfo_unexecuted_blocks=1 00:32:06.581 00:32:06.581 ' 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:06.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.581 --rc genhtml_branch_coverage=1 00:32:06.581 --rc genhtml_function_coverage=1 00:32:06.581 --rc genhtml_legend=1 00:32:06.581 --rc geninfo_all_blocks=1 00:32:06.581 --rc geninfo_unexecuted_blocks=1 00:32:06.581 00:32:06.581 ' 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:06.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.581 --rc genhtml_branch_coverage=1 00:32:06.581 --rc genhtml_function_coverage=1 00:32:06.581 --rc genhtml_legend=1 00:32:06.581 --rc geninfo_all_blocks=1 00:32:06.581 --rc geninfo_unexecuted_blocks=1 00:32:06.581 00:32:06.581 ' 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.581 [nvmf/common.sh@9..@22: test defaults set — NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422 NVMF_IP_PREFIX=192.168.100 NVMF_IP_LEAST_ADDR=8 NVMF_TCP_IP_ADDRESS=127.0.0.1 NVMF_TRANSPORT_OPTS= NVMF_SERIAL=SPDKISFASTANDAWESOME; nvme gen-hostnqn yields NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e and NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e, packed into NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"); NVME_CONNECT='nvme connect' NET_TYPE=phy NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn] 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:06.581 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.581 [scripts/common.sh@544..@553 and paths/export.sh@2..@6: /etc/opt/spdk-pkgdep/paths/export.sh prepends the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin toolchain directories to PATH and exports it; the many duplicated PATH segments are elided here] 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:06.582 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:32:06.582
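The host-side defaults just initialized are the pieces the suite later assembles into its connect commands; spelled out by hand they would look like the sketch below (illustrative only — the target address, port and subsystem NQN are the ones the bdevperf stage earlier in this log used, and nothing is actually connected at this point):

  # Hypothetical expansion of $NVME_CONNECT with the $NVME_HOST arguments:
  nvme connect -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e
22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 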
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:06.582 22:01:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:13.155 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:13.156 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:13.156 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:32:13.156 22:01:45 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:13.156 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:13.156 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # rdma_device_init 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:32:13.156 22:01:45 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@526 -- # allocate_nic_ips 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:13.156 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:13.156 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:13.156 altname enp217s0f0np0 00:32:13.156 altname ens818f0np0 00:32:13.156 inet 192.168.100.8/24 scope global mlx_0_0 00:32:13.156 valid_lft forever preferred_lft forever 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:13.156 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:13.157 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:13.157 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:13.157 altname enp217s0f1np1 00:32:13.157 altname ens818f1np1 00:32:13.157 inet 192.168.100.9/24 scope global mlx_0_1 00:32:13.157 valid_lft forever preferred_lft forever 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:32:13.157 192.168.100.9' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:32:13.157 192.168.100.9' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # head -n 1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:32:13.157 
192.168.100.9' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # head -n 1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # tail -n +2 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:13.157 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:13.417 ************************************ 00:32:13.417 START TEST nvmf_target_disconnect_tc1 00:32:13.417 ************************************ 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:13.417 22:01:45 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:32:13.417 22:01:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:13.417 [2024-11-29 22:01:45.545898] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:13.417 [2024-11-29 22:01:45.545948] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:13.417 [2024-11-29 22:01:45.545958] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7000 00:32:14.355 [2024-11-29 22:01:46.550014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:14.355 [2024-11-29 22:01:46.550089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:14.355 [2024-11-29 22:01:46.550134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:32:14.355 [2024-11-29 22:01:46.550165] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:14.355 [2024-11-29 22:01:46.550178] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:14.355 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:32:14.355 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:14.355 Initializing NVMe Controllers 00:32:14.355 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:32:14.355 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:14.355 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:14.355 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:14.355 00:32:14.355 real 0m1.137s 00:32:14.355 user 0m0.866s 00:32:14.355 sys 0m0.259s 00:32:14.355 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:14.355 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:14.355 ************************************ 00:32:14.355 END TEST nvmf_target_disconnect_tc1 00:32:14.355 ************************************ 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:14.615 22:01:46 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:14.615 ************************************ 00:32:14.615 START TEST nvmf_target_disconnect_tc2 00:32:14.615 ************************************ 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3229151 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3229151 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3229151 ']' 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:14.615 22:01:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:14.615 [2024-11-29 22:01:46.687623] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:14.615 [2024-11-29 22:01:46.687678] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.615 [2024-11-29 22:01:46.771176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:14.615 [2024-11-29 22:01:46.810179] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
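
A note on the core mask before the reactor notices below: nvmf_tgt is launched with -m 0xF0, and bit i of that mask selects reactor core i, so 0xF0 pins the target to cores 4-7. A minimal shell sketch of the decoding (variable names illustrative, not from the test scripts):

    mask=0xF0                        # the -m argument passed to nvmf_tgt above
    for i in $(seq 0 7); do          # bash arithmetic accepts the hex literal
        (( (mask >> i) & 1 )) && echo "reactor core $i"
    done
    # prints cores 4, 5, 6, 7 -- matching the "Reactor started on core" notices
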
00:32:14.615 [2024-11-29 22:01:46.810222] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.615 [2024-11-29 22:01:46.810232] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.615 [2024-11-29 22:01:46.810240] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.615 [2024-11-29 22:01:46.810247] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.615 [2024-11-29 22:01:46.810372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:32:14.615 [2024-11-29 22:01:46.810505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:32:14.615 [2024-11-29 22:01:46.810613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:32:14.615 [2024-11-29 22:01:46.810615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.552 Malloc0 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.552 [2024-11-29 22:01:47.608537] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae58f0/0xaf15d0) succeed. 00:32:15.552 [2024-11-29 22:01:47.619265] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae6ee0/0xb32c70) succeed. 
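
For readers tracking the RPC traffic: the target side is assembled from a handful of SPDK JSON-RPCs, all visible in the trace (bdev_malloc_create and nvmf_create_transport just above; the subsystem and listener calls follow immediately). A condensed sketch of the same sequence driven by hand through scripts/rpc.py, assuming a running nvmf_tgt (the $rpc shorthand is illustrative):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
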
00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.552 [2024-11-29 22:01:47.758203] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3229386 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:15.552 22:01:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:18.083 22:01:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3229151 00:32:18.083 22:01:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:19.019 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Read completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 Write completed with error (sct=0, sc=8) 00:32:19.020 starting I/O failed 00:32:19.020 [2024-11-29 22:01:50.952725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:19.587 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3229151 Killed "${NVMF_APP[@]}" "$@" 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:19.587 22:01:51 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3229984 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3229984 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3229984 ']' 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:19.587 22:01:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:19.846 [2024-11-29 22:01:51.835161] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
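
In outline, what tc2 just exercised: start the target, point the reconnect workload at it, hard-kill the target while I/O is in flight (the kill -9 3229151 and the storm of aborted completions above), then bring up a fresh target so the host qpairs can attempt recovery. A bare-bones sketch of that choreography under the same assumptions as the trace (paths, arguments, and sleeps taken from the log; error handling omitted):

    "${NVMF_APP[@]}" -m 0xF0 & nvmfpid=$!            # start the nvmf target
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' & reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"                               # yank the target away mid-I/O
    sleep 2
    "${NVMF_APP[@]}" -m 0xF0 & nvmfpid=$!            # restart; the host now races to reconnect
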
00:32:19.846 [2024-11-29 22:01:51.835218] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.846 [2024-11-29 22:01:51.923789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Read completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 Write completed with error (sct=0, sc=8) 00:32:19.846 starting I/O failed 00:32:19.846 [2024-11-29 22:01:51.957793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:19.846 [2024-11-29 22:01:51.962312] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
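
On the repeated "completed with error (sct=0, sc=8)" lines: sct 0 is the NVMe generic command status type, and, assuming the reconnect example prints the raw status fields in decimal, code 8 (0x08) in that set is "Command Aborted due to SQ Deletion" per the NVMe base specification -- exactly what in-flight I/O should report when its queue pair is torn down underneath it. A small illustrative decoder (the helper name is made up, not part of the test suite):

    decode_nvme_status() {          # illustrative only
        local sct=$1 sc=$2
        if (( sct == 0 && sc == 8 )); then
            echo "generic status: Command Aborted due to SQ Deletion"
        else
            printf 'sct=%d sc=0x%02x (see the NVMe base spec status tables)\n' "$sct" "$sc"
        fi
    }
    decode_nvme_status 0 8
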
00:32:19.846 [2024-11-29 22:01:51.962345] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.846 [2024-11-29 22:01:51.962356] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.846 [2024-11-29 22:01:51.962365] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.846 [2024-11-29 22:01:51.962372] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.846 [2024-11-29 22:01:51.962501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:32:19.846 [2024-11-29 22:01:51.962611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:32:19.846 [2024-11-29 22:01:51.962719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:32:19.846 [2024-11-29 22:01:51.962721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 Malloc0 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 [2024-11-29 22:01:52.773115] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23728f0/0x237e5d0) succeed. 00:32:20.781 [2024-11-29 22:01:52.783919] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2373ee0/0x23bfc70) succeed. 
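
The restarted target again reports "Create IB device mlx5_0/mlx5_1 ... succeed." To confirm the same devices from outside the test, the stock rdma-core and iproute2 tools suffice; a hedged sketch, assuming those packages are present on the build host:

    ibv_devices                 # lists the verbs devices, e.g. mlx5_0 / mlx5_1
    rdma link show              # maps each IB device/port to its netdev (mlx_0_0, mlx_0_1)
    ip -o -4 addr show mlx_0_0  # the 192.168.100.8/24 address the listener binds
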
00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 [2024-11-29 22:01:52.923268] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.781 22:01:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3229386 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 
starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Read completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 Write completed with error (sct=0, sc=8) 00:32:20.781 starting I/O failed 00:32:20.781 [2024-11-29 22:01:52.962849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.781 [2024-11-29 22:01:52.971438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.781 [2024-11-29 22:01:52.971497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.781 [2024-11-29 22:01:52.971517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.781 [2024-11-29 22:01:52.971528] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.781 [2024-11-29 22:01:52.971537] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:20.781 [2024-11-29 22:01:52.981614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.781 qpair failed and we were unable to recover it. 
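
The connect failures after the restart decode consistently from both sides: the target logs "Unknown controller ID 0x1" because its controller state died with the killed process, so it rejects the host's attempt to re-add an I/O qpair; the host sees "Connect command completed with error: sct 1, sc 130". sct 1 is command-specific status, and for a Fabrics Connect command 130 (0x82) is "Connect Invalid Parameters" -- the two logs are telling the same story. A one-liner to sanity-check the code point:

    printf 'sc %d = 0x%02x -> Connect Invalid Parameters (Fabrics Connect)\n' 130 130
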
[... the same seven-record failure sequence (ctrlr.c:762 "Unknown controller ID 0x1" -> nvme_fabric.c:599 "Connect command failed, rc -5" -> nvme_fabric.c:610 "Connect command completed with error: sct 1, sc 130" -> nvme_rdma.c:1343 "Failed to poll NVMe-oF Fabric CONNECT command" -> nvme_rdma.c:2696 "Failed to connect rqpair=0x20000032f740" -> nvme_qpair.c:804 "CQ transport error -6 (No such device or address) on qpair id 3" -> "qpair failed and we were unable to recover it.") repeats for several dozen reconnect attempts with only the timestamps changing, from 22:01:52.991 through 22:01:54.285; duplicate records condensed. The final attempt follows. ...]
00:32:22.079 [2024-11-29 22:01:54.295083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:22.079 [2024-11-29 22:01:54.295125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:22.079 [2024-11-29 22:01:54.295143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:22.079 [2024-11-29 22:01:54.295152] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:22.079 [2024-11-29 22:01:54.295161] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:22.079 [2024-11-29 22:01:54.305233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:22.079 qpair failed and we were unable to recover it.
00:32:22.079 [2024-11-29 22:01:54.314966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.080 [2024-11-29 22:01:54.315006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.080 [2024-11-29 22:01:54.315024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.080 [2024-11-29 22:01:54.315033] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.080 [2024-11-29 22:01:54.315042] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.339 [2024-11-29 22:01:54.325314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.339 qpair failed and we were unable to recover it. 00:32:22.339 [2024-11-29 22:01:54.335162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.339 [2024-11-29 22:01:54.335210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.339 [2024-11-29 22:01:54.335227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.339 [2024-11-29 22:01:54.335236] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.339 [2024-11-29 22:01:54.335245] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.339 [2024-11-29 22:01:54.345419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.339 qpair failed and we were unable to recover it. 00:32:22.339 [2024-11-29 22:01:54.355273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.339 [2024-11-29 22:01:54.355312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.339 [2024-11-29 22:01:54.355332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.339 [2024-11-29 22:01:54.355341] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.339 [2024-11-29 22:01:54.355350] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.339 [2024-11-29 22:01:54.365620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.339 qpair failed and we were unable to recover it. 
00:32:22.339 [2024-11-29 22:01:54.375504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.339 [2024-11-29 22:01:54.375548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.339 [2024-11-29 22:01:54.375565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.339 [2024-11-29 22:01:54.375574] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.339 [2024-11-29 22:01:54.375583] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.339 [2024-11-29 22:01:54.385847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.339 qpair failed and we were unable to recover it. 00:32:22.339 [2024-11-29 22:01:54.395507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.339 [2024-11-29 22:01:54.395547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.339 [2024-11-29 22:01:54.395564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.339 [2024-11-29 22:01:54.395573] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.339 [2024-11-29 22:01:54.395582] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.339 [2024-11-29 22:01:54.405726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.339 qpair failed and we were unable to recover it. 00:32:22.339 [2024-11-29 22:01:54.415408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.339 [2024-11-29 22:01:54.415452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.339 [2024-11-29 22:01:54.415470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.415479] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.415488] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.340 [2024-11-29 22:01:54.425823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.340 qpair failed and we were unable to recover it. 
00:32:22.340 [2024-11-29 22:01:54.435634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.340 [2024-11-29 22:01:54.435678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.340 [2024-11-29 22:01:54.435696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.435705] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.435717] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.340 [2024-11-29 22:01:54.445835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.340 qpair failed and we were unable to recover it. 00:32:22.340 [2024-11-29 22:01:54.455562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.340 [2024-11-29 22:01:54.455604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.340 [2024-11-29 22:01:54.455622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.455631] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.455640] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.340 [2024-11-29 22:01:54.466037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.340 qpair failed and we were unable to recover it. 00:32:22.340 [2024-11-29 22:01:54.475833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.340 [2024-11-29 22:01:54.475872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.340 [2024-11-29 22:01:54.475889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.475898] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.475907] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.340 [2024-11-29 22:01:54.486160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.340 qpair failed and we were unable to recover it. 
00:32:22.340 [2024-11-29 22:01:54.495894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.340 [2024-11-29 22:01:54.495946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.340 [2024-11-29 22:01:54.495963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.495973] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.495981] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.340 [2024-11-29 22:01:54.506165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.340 qpair failed and we were unable to recover it. 00:32:22.340 [2024-11-29 22:01:54.515789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.340 [2024-11-29 22:01:54.515828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.340 [2024-11-29 22:01:54.515846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.515855] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.515863] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.340 [2024-11-29 22:01:54.526129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.340 qpair failed and we were unable to recover it. 00:32:22.340 [2024-11-29 22:01:54.535895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.340 [2024-11-29 22:01:54.535938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.340 [2024-11-29 22:01:54.535956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.535965] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.535973] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.340 [2024-11-29 22:01:54.546231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.340 qpair failed and we were unable to recover it. 
00:32:22.340 [2024-11-29 22:01:54.555831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.340 [2024-11-29 22:01:54.555870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.340 [2024-11-29 22:01:54.555888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.555897] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.555905] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.340 [2024-11-29 22:01:54.566294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.340 qpair failed and we were unable to recover it. 00:32:22.340 [2024-11-29 22:01:54.576202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.340 [2024-11-29 22:01:54.576242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.340 [2024-11-29 22:01:54.576259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.340 [2024-11-29 22:01:54.576268] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.340 [2024-11-29 22:01:54.576277] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.600 [2024-11-29 22:01:54.586445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.600 qpair failed and we were unable to recover it. 00:32:22.600 [2024-11-29 22:01:54.595963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.600 [2024-11-29 22:01:54.596003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.600 [2024-11-29 22:01:54.596020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.600 [2024-11-29 22:01:54.596029] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.600 [2024-11-29 22:01:54.596038] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.600 [2024-11-29 22:01:54.606493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.600 qpair failed and we were unable to recover it. 
00:32:22.600 [2024-11-29 22:01:54.616197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.600 [2024-11-29 22:01:54.616241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.600 [2024-11-29 22:01:54.616259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.600 [2024-11-29 22:01:54.616271] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.600 [2024-11-29 22:01:54.616280] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.600 [2024-11-29 22:01:54.626515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.600 qpair failed and we were unable to recover it. 00:32:22.600 [2024-11-29 22:01:54.636072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.600 [2024-11-29 22:01:54.636111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.600 [2024-11-29 22:01:54.636128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.600 [2024-11-29 22:01:54.636137] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.600 [2024-11-29 22:01:54.636145] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.600 [2024-11-29 22:01:54.646686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.600 qpair failed and we were unable to recover it. 00:32:22.600 [2024-11-29 22:01:54.656337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.600 [2024-11-29 22:01:54.656381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.600 [2024-11-29 22:01:54.656398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.600 [2024-11-29 22:01:54.656407] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.600 [2024-11-29 22:01:54.656416] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.600 [2024-11-29 22:01:54.666674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.600 qpair failed and we were unable to recover it. 
00:32:22.600 [2024-11-29 22:01:54.676261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.600 [2024-11-29 22:01:54.676299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.600 [2024-11-29 22:01:54.676316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.600 [2024-11-29 22:01:54.676325] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.600 [2024-11-29 22:01:54.676333] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.600 [2024-11-29 22:01:54.686701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.600 qpair failed and we were unable to recover it. 00:32:22.600 [2024-11-29 22:01:54.696352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.600 [2024-11-29 22:01:54.696389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.600 [2024-11-29 22:01:54.696406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.600 [2024-11-29 22:01:54.696416] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.600 [2024-11-29 22:01:54.696424] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.600 [2024-11-29 22:01:54.706635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.600 qpair failed and we were unable to recover it. 00:32:22.600 [2024-11-29 22:01:54.716393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.601 [2024-11-29 22:01:54.716432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.601 [2024-11-29 22:01:54.716450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.601 [2024-11-29 22:01:54.716459] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.601 [2024-11-29 22:01:54.716468] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.601 [2024-11-29 22:01:54.726879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.601 qpair failed and we were unable to recover it. 
00:32:22.601 [2024-11-29 22:01:54.736473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.601 [2024-11-29 22:01:54.736513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.601 [2024-11-29 22:01:54.736530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.601 [2024-11-29 22:01:54.736539] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.601 [2024-11-29 22:01:54.736548] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.601 [2024-11-29 22:01:54.746608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.601 qpair failed and we were unable to recover it. 00:32:22.601 [2024-11-29 22:01:54.756380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.601 [2024-11-29 22:01:54.756422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.601 [2024-11-29 22:01:54.756439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.601 [2024-11-29 22:01:54.756449] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.601 [2024-11-29 22:01:54.756457] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.601 [2024-11-29 22:01:54.766862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.601 qpair failed and we were unable to recover it. 00:32:22.601 [2024-11-29 22:01:54.776643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.601 [2024-11-29 22:01:54.776680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.601 [2024-11-29 22:01:54.776698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.601 [2024-11-29 22:01:54.776707] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.601 [2024-11-29 22:01:54.776716] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.601 [2024-11-29 22:01:54.786754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.601 qpair failed and we were unable to recover it. 
00:32:22.601 [2024-11-29 22:01:54.796487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.601 [2024-11-29 22:01:54.796526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.601 [2024-11-29 22:01:54.796546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.601 [2024-11-29 22:01:54.796556] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.601 [2024-11-29 22:01:54.796564] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.601 [2024-11-29 22:01:54.807105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.601 qpair failed and we were unable to recover it. 00:32:22.601 [2024-11-29 22:01:54.816762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.601 [2024-11-29 22:01:54.816801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.601 [2024-11-29 22:01:54.816819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.601 [2024-11-29 22:01:54.816828] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.601 [2024-11-29 22:01:54.816837] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.601 [2024-11-29 22:01:54.826775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.601 qpair failed and we were unable to recover it. 00:32:22.601 [2024-11-29 22:01:54.836628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.601 [2024-11-29 22:01:54.836676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.601 [2024-11-29 22:01:54.836694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.601 [2024-11-29 22:01:54.836703] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.601 [2024-11-29 22:01:54.836712] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.860 [2024-11-29 22:01:54.847133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.860 qpair failed and we were unable to recover it. 
00:32:22.860 [2024-11-29 22:01:54.856897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.860 [2024-11-29 22:01:54.856936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.860 [2024-11-29 22:01:54.856953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.860 [2024-11-29 22:01:54.856962] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.860 [2024-11-29 22:01:54.856970] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.860 [2024-11-29 22:01:54.867149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.860 qpair failed and we were unable to recover it. 00:32:22.860 [2024-11-29 22:01:54.876746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.860 [2024-11-29 22:01:54.876787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.860 [2024-11-29 22:01:54.876805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.860 [2024-11-29 22:01:54.876814] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.860 [2024-11-29 22:01:54.876822] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.860 [2024-11-29 22:01:54.887293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.860 qpair failed and we were unable to recover it. 00:32:22.860 [2024-11-29 22:01:54.897095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.860 [2024-11-29 22:01:54.897135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.860 [2024-11-29 22:01:54.897152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.860 [2024-11-29 22:01:54.897161] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.860 [2024-11-29 22:01:54.897170] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.860 [2024-11-29 22:01:54.907271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.860 qpair failed and we were unable to recover it. 
00:32:22.860 [2024-11-29 22:01:54.916941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.860 [2024-11-29 22:01:54.916984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.860 [2024-11-29 22:01:54.917002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.860 [2024-11-29 22:01:54.917011] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.860 [2024-11-29 22:01:54.917020] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.860 [2024-11-29 22:01:54.927254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.860 qpair failed and we were unable to recover it. 00:32:22.860 [2024-11-29 22:01:54.937078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.860 [2024-11-29 22:01:54.937119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.860 [2024-11-29 22:01:54.937135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.860 [2024-11-29 22:01:54.937144] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.860 [2024-11-29 22:01:54.937153] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.861 [2024-11-29 22:01:54.947337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.861 qpair failed and we were unable to recover it. 00:32:22.861 [2024-11-29 22:01:54.956988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.861 [2024-11-29 22:01:54.957028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.861 [2024-11-29 22:01:54.957045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.861 [2024-11-29 22:01:54.957054] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.861 [2024-11-29 22:01:54.957063] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.861 [2024-11-29 22:01:54.967618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.861 qpair failed and we were unable to recover it. 
00:32:22.861 [2024-11-29 22:01:54.977210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.861 [2024-11-29 22:01:54.977254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.861 [2024-11-29 22:01:54.977271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.861 [2024-11-29 22:01:54.977279] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.861 [2024-11-29 22:01:54.977288] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.861 [2024-11-29 22:01:54.987291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.861 qpair failed and we were unable to recover it. 00:32:22.861 [2024-11-29 22:01:54.997131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.861 [2024-11-29 22:01:54.997172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.861 [2024-11-29 22:01:54.997189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.861 [2024-11-29 22:01:54.997198] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.861 [2024-11-29 22:01:54.997206] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.861 [2024-11-29 22:01:55.007340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.861 qpair failed and we were unable to recover it. 00:32:22.861 [2024-11-29 22:01:55.017427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.861 [2024-11-29 22:01:55.017464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.861 [2024-11-29 22:01:55.017482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.861 [2024-11-29 22:01:55.017491] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.861 [2024-11-29 22:01:55.017500] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.861 [2024-11-29 22:01:55.027672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.861 qpair failed and we were unable to recover it. 
00:32:22.861 [2024-11-29 22:01:55.037160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.861 [2024-11-29 22:01:55.037201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.861 [2024-11-29 22:01:55.037218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.861 [2024-11-29 22:01:55.037227] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.861 [2024-11-29 22:01:55.037235] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.861 [2024-11-29 22:01:55.047570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.861 qpair failed and we were unable to recover it. 00:32:22.861 [2024-11-29 22:01:55.057461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.861 [2024-11-29 22:01:55.057506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.861 [2024-11-29 22:01:55.057524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.861 [2024-11-29 22:01:55.057536] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.861 [2024-11-29 22:01:55.057545] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.861 [2024-11-29 22:01:55.067658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.861 qpair failed and we were unable to recover it. 00:32:22.861 [2024-11-29 22:01:55.077338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.861 [2024-11-29 22:01:55.077382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.861 [2024-11-29 22:01:55.077400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.861 [2024-11-29 22:01:55.077409] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.861 [2024-11-29 22:01:55.077418] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:22.861 [2024-11-29 22:01:55.087846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:22.861 qpair failed and we were unable to recover it. 
00:32:22.861 [2024-11-29 22:01:55.097511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:22.861 [2024-11-29 22:01:55.097553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:22.861 [2024-11-29 22:01:55.097571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:22.861 [2024-11-29 22:01:55.097581] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:22.861 [2024-11-29 22:01:55.097589] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.118 [2024-11-29 22:01:55.107908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.118 qpair failed and we were unable to recover it. 00:32:23.118 [2024-11-29 22:01:55.117620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.117661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.117684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.117693] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.117702] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.127912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 00:32:23.119 [2024-11-29 22:01:55.137763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.137802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.137820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.137829] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.137837] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.147931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 
00:32:23.119 [2024-11-29 22:01:55.157588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.157630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.157647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.157656] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.157664] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.168011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 00:32:23.119 [2024-11-29 22:01:55.177844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.177887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.177904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.177913] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.177922] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.188153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 00:32:23.119 [2024-11-29 22:01:55.197656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.197703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.197720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.197728] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.197737] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.208153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 
00:32:23.119 [2024-11-29 22:01:55.218069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.218108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.218127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.218136] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.218144] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.228250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 00:32:23.119 [2024-11-29 22:01:55.237801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.237845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.237865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.237874] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.237883] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.248215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 00:32:23.119 [2024-11-29 22:01:55.258004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.258040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.258058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.258067] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.258075] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.268323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 
00:32:23.119 [2024-11-29 22:01:55.277997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.278037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.278055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.278064] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.278072] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.288287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 00:32:23.119 [2024-11-29 22:01:55.298289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.298326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.298343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.298352] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.298360] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.308552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 00:32:23.119 [2024-11-29 22:01:55.318337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:23.119 [2024-11-29 22:01:55.318381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:23.119 [2024-11-29 22:01:55.318400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:23.119 [2024-11-29 22:01:55.318409] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:23.119 [2024-11-29 22:01:55.318418] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:23.119 [2024-11-29 22:01:55.328419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:23.119 qpair failed and we were unable to recover it. 
00:32:23.119 [2024-11-29 22:01:55.338019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.119 [2024-11-29 22:01:55.338064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.119 [2024-11-29 22:01:55.338081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.119 [2024-11-29 22:01:55.338090] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.119 [2024-11-29 22:01:55.338099] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.119 [2024-11-29 22:01:55.348617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.119 qpair failed and we were unable to recover it.
00:32:23.119 [2024-11-29 22:01:55.358275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.119 [2024-11-29 22:01:55.358314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.119 [2024-11-29 22:01:55.358331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.119 [2024-11-29 22:01:55.358340] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.119 [2024-11-29 22:01:55.358348] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.377 [2024-11-29 22:01:55.368721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.377 qpair failed and we were unable to recover it.
00:32:23.377 [2024-11-29 22:01:55.378320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.377 [2024-11-29 22:01:55.378367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.377 [2024-11-29 22:01:55.378384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.377 [2024-11-29 22:01:55.378393] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.377 [2024-11-29 22:01:55.378402] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.377 [2024-11-29 22:01:55.388704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.377 qpair failed and we were unable to recover it.
00:32:23.377 [2024-11-29 22:01:55.398543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.377 [2024-11-29 22:01:55.398586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.377 [2024-11-29 22:01:55.398603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.377 [2024-11-29 22:01:55.398613] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.377 [2024-11-29 22:01:55.398622] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.377 [2024-11-29 22:01:55.408757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.377 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.418430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.418475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.418496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.418505] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.418514] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.428599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.438468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.438510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.438527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.438536] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.438545] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.448643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.458541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.458585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.458602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.458611] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.458620] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.468800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.478613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.478660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.478682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.478691] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.478700] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.488787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.498701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.498736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.498753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.498762] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.498774] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.508915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.518692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.518734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.518752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.518761] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.518770] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.528940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.538811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.538851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.538869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.538878] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.538887] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.548954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.558904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.558948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.558967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.558976] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.558985] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.569218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.578824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.578869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.578886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.578895] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.578905] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.378 [2024-11-29 22:01:55.589140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.378 qpair failed and we were unable to recover it.
00:32:23.378 [2024-11-29 22:01:55.598942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.378 [2024-11-29 22:01:55.598983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.378 [2024-11-29 22:01:55.599000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.378 [2024-11-29 22:01:55.599010] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.378 [2024-11-29 22:01:55.599018] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.379 [2024-11-29 22:01:55.609086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.379 qpair failed and we were unable to recover it.
00:32:23.379 [2024-11-29 22:01:55.618970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.379 [2024-11-29 22:01:55.619015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.379 [2024-11-29 22:01:55.619033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.379 [2024-11-29 22:01:55.619042] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.379 [2024-11-29 22:01:55.619051] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.637 [2024-11-29 22:01:55.629174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.637 qpair failed and we were unable to recover it.
00:32:23.637 [2024-11-29 22:01:55.639085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.637 [2024-11-29 22:01:55.639135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.637 [2024-11-29 22:01:55.639153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.637 [2024-11-29 22:01:55.639162] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.637 [2024-11-29 22:01:55.639170] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.637 [2024-11-29 22:01:55.649383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.637 qpair failed and we were unable to recover it.
00:32:23.637 [2024-11-29 22:01:55.659106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.637 [2024-11-29 22:01:55.659144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.637 [2024-11-29 22:01:55.659163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.659173] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.659181] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.669355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.679125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.679166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.679184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.679196] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.679205] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.689470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.699317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.699358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.699375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.699385] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.699393] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.709454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.719243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.719285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.719302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.719312] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.719321] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.729462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.739286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.739330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.739347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.739357] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.739365] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.749768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.759326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.759370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.759388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.759397] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.759405] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.769556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.779308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.779350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.779367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.779376] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.779385] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.789672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.799479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.799517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.799534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.799544] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.799553] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.809664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.819462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.819509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.819527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.819536] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.819544] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.829861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.839571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.839615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.638 [2024-11-29 22:01:55.839633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.638 [2024-11-29 22:01:55.839642] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.638 [2024-11-29 22:01:55.839650] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.638 [2024-11-29 22:01:55.849838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.638 qpair failed and we were unable to recover it.
00:32:23.638 [2024-11-29 22:01:55.859620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.638 [2024-11-29 22:01:55.859662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.639 [2024-11-29 22:01:55.859688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.639 [2024-11-29 22:01:55.859697] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.639 [2024-11-29 22:01:55.859706] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.639 [2024-11-29 22:01:55.870013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.639 qpair failed and we were unable to recover it.
00:32:23.639 [2024-11-29 22:01:55.879616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.639 [2024-11-29 22:01:55.879661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.639 [2024-11-29 22:01:55.879683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.639 [2024-11-29 22:01:55.879692] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.639 [2024-11-29 22:01:55.879701] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.897 [2024-11-29 22:01:55.889955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.897 qpair failed and we were unable to recover it.
00:32:23.897 [2024-11-29 22:01:55.899744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.897 [2024-11-29 22:01:55.899786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.897 [2024-11-29 22:01:55.899804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.897 [2024-11-29 22:01:55.899813] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.897 [2024-11-29 22:01:55.899822] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.897 [2024-11-29 22:01:55.910064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.897 qpair failed and we were unable to recover it.
00:32:23.897 [2024-11-29 22:01:55.919843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.897 [2024-11-29 22:01:55.919887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.897 [2024-11-29 22:01:55.919905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.897 [2024-11-29 22:01:55.919914] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.897 [2024-11-29 22:01:55.919923] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.897 [2024-11-29 22:01:55.930050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.897 qpair failed and we were unable to recover it.
00:32:23.897 [2024-11-29 22:01:55.939873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.897 [2024-11-29 22:01:55.939913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.897 [2024-11-29 22:01:55.939931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.897 [2024-11-29 22:01:55.939940] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.897 [2024-11-29 22:01:55.939952] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:55.950025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:55.959905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:55.959942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:55.959959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:55.959968] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:55.959977] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:55.970192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:55.979927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:55.979966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:55.979983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:55.979993] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:55.980001] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:55.990294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:56.000174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:56.000214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:56.000231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:56.000240] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:56.000248] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:56.010359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:56.020179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:56.020220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:56.020238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:56.020247] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:56.020255] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:56.030524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:56.040165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:56.040207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:56.040224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:56.040234] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:56.040242] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:56.050343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:56.060261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:56.060298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:56.060315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:56.060324] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:56.060333] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:56.070504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:56.080260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:56.080302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:56.080319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:56.080329] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:56.080337] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:56.090735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:56.100404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:56.100446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:56.100463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:56.100472] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:56.100480] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:56.110647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:56.120423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:56.120463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:56.120480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:56.120492] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:56.120500] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:23.898 [2024-11-29 22:01:56.130646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:23.898 qpair failed and we were unable to recover it.
00:32:23.898 [2024-11-29 22:01:56.140450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.898 [2024-11-29 22:01:56.140485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.898 [2024-11-29 22:01:56.140502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.898 [2024-11-29 22:01:56.140511] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.898 [2024-11-29 22:01:56.140520] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.157 [2024-11-29 22:01:56.150775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.157 qpair failed and we were unable to recover it.
00:32:24.157 [2024-11-29 22:01:56.160400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.157 [2024-11-29 22:01:56.160441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.157 [2024-11-29 22:01:56.160459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.157 [2024-11-29 22:01:56.160468] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.157 [2024-11-29 22:01:56.160476] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.157 [2024-11-29 22:01:56.170731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.157 qpair failed and we were unable to recover it.
00:32:24.157 [2024-11-29 22:01:56.180672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.157 [2024-11-29 22:01:56.180718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.157 [2024-11-29 22:01:56.180735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.157 [2024-11-29 22:01:56.180744] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.157 [2024-11-29 22:01:56.180753] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.157 [2024-11-29 22:01:56.190939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.157 qpair failed and we were unable to recover it.
00:32:24.157 [2024-11-29 22:01:56.200681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.157 [2024-11-29 22:01:56.200716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.157 [2024-11-29 22:01:56.200733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.157 [2024-11-29 22:01:56.200742] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.157 [2024-11-29 22:01:56.200751] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.157 [2024-11-29 22:01:56.211050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.157 qpair failed and we were unable to recover it.
00:32:24.157 [2024-11-29 22:01:56.220921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.157 [2024-11-29 22:01:56.220962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.157 [2024-11-29 22:01:56.220979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.157 [2024-11-29 22:01:56.220988] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.157 [2024-11-29 22:01:56.220997] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.157 [2024-11-29 22:01:56.230960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.157 qpair failed and we were unable to recover it.
00:32:24.157 [2024-11-29 22:01:56.240783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.240827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.240844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.240854] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.240862] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.158 [2024-11-29 22:01:56.250977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.158 qpair failed and we were unable to recover it.
00:32:24.158 [2024-11-29 22:01:56.260927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.260975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.260992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.261002] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.261011] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.158 [2024-11-29 22:01:56.271129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.158 qpair failed and we were unable to recover it.
00:32:24.158 [2024-11-29 22:01:56.280894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.280938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.280956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.280965] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.280974] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.158 [2024-11-29 22:01:56.291224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.158 qpair failed and we were unable to recover it.
00:32:24.158 [2024-11-29 22:01:56.300953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.300993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.301014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.301023] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.301031] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.158 [2024-11-29 22:01:56.311157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.158 qpair failed and we were unable to recover it.
00:32:24.158 [2024-11-29 22:01:56.320950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.320991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.321009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.321018] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.321027] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.158 [2024-11-29 22:01:56.331314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.158 qpair failed and we were unable to recover it.
00:32:24.158 [2024-11-29 22:01:56.341128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.341172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.341189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.341198] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.341207] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.158 [2024-11-29 22:01:56.351470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.158 qpair failed and we were unable to recover it.
00:32:24.158 [2024-11-29 22:01:56.361081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.361122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.361140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.361149] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.361157] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.158 [2024-11-29 22:01:56.371418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.158 qpair failed and we were unable to recover it.
00:32:24.158 [2024-11-29 22:01:56.381298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.381341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.381358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.381367] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.381379] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.158 [2024-11-29 22:01:56.391509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.158 qpair failed and we were unable to recover it.
00:32:24.158 [2024-11-29 22:01:56.401293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.158 [2024-11-29 22:01:56.401333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.158 [2024-11-29 22:01:56.401351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.158 [2024-11-29 22:01:56.401360] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.158 [2024-11-29 22:01:56.401368] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.417 [2024-11-29 22:01:56.411455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.417 qpair failed and we were unable to recover it.
00:32:24.417 [2024-11-29 22:01:56.421355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.417 [2024-11-29 22:01:56.421398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.417 [2024-11-29 22:01:56.421416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.417 [2024-11-29 22:01:56.421425] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.417 [2024-11-29 22:01:56.421433] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.417 [2024-11-29 22:01:56.431643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.417 qpair failed and we were unable to recover it.
00:32:24.417 [2024-11-29 22:01:56.441415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.417 [2024-11-29 22:01:56.441456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.417 [2024-11-29 22:01:56.441474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.417 [2024-11-29 22:01:56.441483] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.417 [2024-11-29 22:01:56.441491] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.417 [2024-11-29 22:01:56.451638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.417 qpair failed and we were unable to recover it.
00:32:24.417 [2024-11-29 22:01:56.461424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.417 [2024-11-29 22:01:56.461469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.417 [2024-11-29 22:01:56.461486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.417 [2024-11-29 22:01:56.461495] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.417 [2024-11-29 22:01:56.461503] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:24.417 [2024-11-29 22:01:56.471737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.417 qpair failed and we were unable to recover it.
00:32:24.417 [2024-11-29 22:01:56.481435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.417 [2024-11-29 22:01:56.481479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.417 [2024-11-29 22:01:56.481496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.417 [2024-11-29 22:01:56.481506] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.417 [2024-11-29 22:01:56.481514] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.417 [2024-11-29 22:01:56.491697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.417 qpair failed and we were unable to recover it. 00:32:24.417 [2024-11-29 22:01:56.501506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.417 [2024-11-29 22:01:56.501546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.417 [2024-11-29 22:01:56.501563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.417 [2024-11-29 22:01:56.501572] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.417 [2024-11-29 22:01:56.501580] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.417 [2024-11-29 22:01:56.511931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.417 qpair failed and we were unable to recover it. 00:32:24.417 [2024-11-29 22:01:56.521658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.417 [2024-11-29 22:01:56.521700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.417 [2024-11-29 22:01:56.521717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.417 [2024-11-29 22:01:56.521726] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.417 [2024-11-29 22:01:56.521734] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.417 [2024-11-29 22:01:56.531962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.417 qpair failed and we were unable to recover it. 
00:32:24.417 [2024-11-29 22:01:56.541637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.417 [2024-11-29 22:01:56.541684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.417 [2024-11-29 22:01:56.541702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.418 [2024-11-29 22:01:56.541711] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.418 [2024-11-29 22:01:56.541720] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.418 [2024-11-29 22:01:56.551772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.418 qpair failed and we were unable to recover it. 00:32:24.418 [2024-11-29 22:01:56.561770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.418 [2024-11-29 22:01:56.561811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.418 [2024-11-29 22:01:56.561830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.418 [2024-11-29 22:01:56.561842] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.418 [2024-11-29 22:01:56.561851] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.418 [2024-11-29 22:01:56.571979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.418 qpair failed and we were unable to recover it. 00:32:24.418 [2024-11-29 22:01:56.581905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.418 [2024-11-29 22:01:56.581952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.418 [2024-11-29 22:01:56.581970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.418 [2024-11-29 22:01:56.581980] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.418 [2024-11-29 22:01:56.581988] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.418 [2024-11-29 22:01:56.592089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.418 qpair failed and we were unable to recover it. 
00:32:24.418 [2024-11-29 22:01:56.601741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.418 [2024-11-29 22:01:56.601784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.418 [2024-11-29 22:01:56.601802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.418 [2024-11-29 22:01:56.601811] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.418 [2024-11-29 22:01:56.601819] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.418 [2024-11-29 22:01:56.612188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.418 qpair failed and we were unable to recover it. 00:32:24.418 [2024-11-29 22:01:56.621899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.418 [2024-11-29 22:01:56.621937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.418 [2024-11-29 22:01:56.621955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.418 [2024-11-29 22:01:56.621964] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.418 [2024-11-29 22:01:56.621973] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.418 [2024-11-29 22:01:56.632310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.418 qpair failed and we were unable to recover it. 00:32:24.418 [2024-11-29 22:01:56.641962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.418 [2024-11-29 22:01:56.642004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.418 [2024-11-29 22:01:56.642021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.418 [2024-11-29 22:01:56.642031] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.418 [2024-11-29 22:01:56.642039] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.418 [2024-11-29 22:01:56.652254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.418 qpair failed and we were unable to recover it. 
00:32:24.418 [2024-11-29 22:01:56.661983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.418 [2024-11-29 22:01:56.662024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.418 [2024-11-29 22:01:56.662042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.418 [2024-11-29 22:01:56.662051] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.418 [2024-11-29 22:01:56.662059] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.677 [2024-11-29 22:01:56.672300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.677 qpair failed and we were unable to recover it. 00:32:24.677 [2024-11-29 22:01:56.682036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.677 [2024-11-29 22:01:56.682076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.677 [2024-11-29 22:01:56.682093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.677 [2024-11-29 22:01:56.682102] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.677 [2024-11-29 22:01:56.682111] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.677 [2024-11-29 22:01:56.692428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.677 qpair failed and we were unable to recover it. 00:32:24.677 [2024-11-29 22:01:56.702201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.677 [2024-11-29 22:01:56.702240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.677 [2024-11-29 22:01:56.702257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.677 [2024-11-29 22:01:56.702266] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.677 [2024-11-29 22:01:56.702274] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.677 [2024-11-29 22:01:56.712347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.677 qpair failed and we were unable to recover it. 
00:32:24.677 [2024-11-29 22:01:56.722209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.677 [2024-11-29 22:01:56.722251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.677 [2024-11-29 22:01:56.722268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.677 [2024-11-29 22:01:56.722277] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.677 [2024-11-29 22:01:56.722286] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.677 [2024-11-29 22:01:56.732623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.677 qpair failed and we were unable to recover it. 00:32:24.677 [2024-11-29 22:01:56.742314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.677 [2024-11-29 22:01:56.742359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.677 [2024-11-29 22:01:56.742382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.677 [2024-11-29 22:01:56.742391] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.677 [2024-11-29 22:01:56.742399] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.677 [2024-11-29 22:01:56.752511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.677 qpair failed and we were unable to recover it. 00:32:24.677 [2024-11-29 22:01:56.762304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.677 [2024-11-29 22:01:56.762349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.677 [2024-11-29 22:01:56.762366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.677 [2024-11-29 22:01:56.762375] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.677 [2024-11-29 22:01:56.762384] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.677 [2024-11-29 22:01:56.772598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.677 qpair failed and we were unable to recover it. 
00:32:24.677 [2024-11-29 22:01:56.782365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.677 [2024-11-29 22:01:56.782406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.677 [2024-11-29 22:01:56.782424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.677 [2024-11-29 22:01:56.782433] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.677 [2024-11-29 22:01:56.782441] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.677 [2024-11-29 22:01:56.792813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.677 qpair failed and we were unable to recover it. 00:32:24.677 [2024-11-29 22:01:56.802375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.677 [2024-11-29 22:01:56.802417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.677 [2024-11-29 22:01:56.802434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.677 [2024-11-29 22:01:56.802443] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.677 [2024-11-29 22:01:56.802452] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.677 [2024-11-29 22:01:56.812651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.677 qpair failed and we were unable to recover it. 00:32:24.678 [2024-11-29 22:01:56.822585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.678 [2024-11-29 22:01:56.822630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.678 [2024-11-29 22:01:56.822648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.678 [2024-11-29 22:01:56.822657] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.678 [2024-11-29 22:01:56.822670] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.678 [2024-11-29 22:01:56.832790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.678 qpair failed and we were unable to recover it. 
00:32:24.678 [2024-11-29 22:01:56.842529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.678 [2024-11-29 22:01:56.842572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.678 [2024-11-29 22:01:56.842589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.678 [2024-11-29 22:01:56.842598] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.678 [2024-11-29 22:01:56.842606] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.678 [2024-11-29 22:01:56.852895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.678 qpair failed and we were unable to recover it. 00:32:24.678 [2024-11-29 22:01:56.862629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.678 [2024-11-29 22:01:56.862672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.678 [2024-11-29 22:01:56.862688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.678 [2024-11-29 22:01:56.862698] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.678 [2024-11-29 22:01:56.862706] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.678 [2024-11-29 22:01:56.873030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.678 qpair failed and we were unable to recover it. 00:32:24.678 [2024-11-29 22:01:56.882824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.678 [2024-11-29 22:01:56.882866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.678 [2024-11-29 22:01:56.882883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.678 [2024-11-29 22:01:56.882892] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.678 [2024-11-29 22:01:56.882900] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.678 [2024-11-29 22:01:56.892975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.678 qpair failed and we were unable to recover it. 
00:32:24.678 [2024-11-29 22:01:56.902693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.678 [2024-11-29 22:01:56.902739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.678 [2024-11-29 22:01:56.902756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.678 [2024-11-29 22:01:56.902765] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.678 [2024-11-29 22:01:56.902773] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.678 [2024-11-29 22:01:56.913028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.678 qpair failed and we were unable to recover it. 00:32:24.678 [2024-11-29 22:01:56.922923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.678 [2024-11-29 22:01:56.922970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.678 [2024-11-29 22:01:56.922987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.678 [2024-11-29 22:01:56.922996] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.678 [2024-11-29 22:01:56.923004] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:56.933197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 00:32:24.937 [2024-11-29 22:01:56.942913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:56.942952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:56.942969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:56.942979] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:56.942987] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:56.953211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 
00:32:24.937 [2024-11-29 22:01:56.962975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:56.963017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:56.963033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:56.963042] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:56.963051] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:56.973222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 00:32:24.937 [2024-11-29 22:01:56.983036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:56.983080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:56.983098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:56.983107] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:56.983116] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:56.993308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 00:32:24.937 [2024-11-29 22:01:57.003200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.003241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.003259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:57.003271] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:57.003281] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:57.013337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 
00:32:24.937 [2024-11-29 22:01:57.023155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.023193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.023211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:57.023220] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:57.023228] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:57.033593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 00:32:24.937 [2024-11-29 22:01:57.043262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.043304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.043321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:57.043330] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:57.043339] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:57.053599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 00:32:24.937 [2024-11-29 22:01:57.063397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.063440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.063457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:57.063466] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:57.063474] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:57.073523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 
00:32:24.937 [2024-11-29 22:01:57.083524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.083566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.083583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:57.083592] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:57.083600] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:57.093497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 00:32:24.937 [2024-11-29 22:01:57.103336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.103380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.103397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:57.103406] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:57.103415] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:57.113642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 00:32:24.937 [2024-11-29 22:01:57.123592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.123631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.123648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:57.123657] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:57.123670] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:57.133908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 
00:32:24.937 [2024-11-29 22:01:57.143367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.143405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.143423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.937 [2024-11-29 22:01:57.143432] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.937 [2024-11-29 22:01:57.143440] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.937 [2024-11-29 22:01:57.153844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.937 qpair failed and we were unable to recover it. 00:32:24.937 [2024-11-29 22:01:57.163629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:24.937 [2024-11-29 22:01:57.163673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:24.937 [2024-11-29 22:01:57.163690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:24.938 [2024-11-29 22:01:57.163699] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:24.938 [2024-11-29 22:01:57.163707] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:24.938 [2024-11-29 22:01:57.173750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.938 qpair failed and we were unable to recover it. 00:32:25.196 [2024-11-29 22:01:57.183550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.196 [2024-11-29 22:01:57.183593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.196 [2024-11-29 22:01:57.183613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.196 [2024-11-29 22:01:57.183623] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.196 [2024-11-29 22:01:57.183631] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.196 [2024-11-29 22:01:57.193798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.196 qpair failed and we were unable to recover it. 
00:32:25.196 [2024-11-29 22:01:57.203662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.196 [2024-11-29 22:01:57.203707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.196 [2024-11-29 22:01:57.203724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.196 [2024-11-29 22:01:57.203733] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.196 [2024-11-29 22:01:57.203742] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.196 [2024-11-29 22:01:57.214077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.196 qpair failed and we were unable to recover it. 00:32:25.196 [2024-11-29 22:01:57.223720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.196 [2024-11-29 22:01:57.223765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.196 [2024-11-29 22:01:57.223782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.196 [2024-11-29 22:01:57.223791] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.196 [2024-11-29 22:01:57.223800] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.196 [2024-11-29 22:01:57.234102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.196 qpair failed and we were unable to recover it. 00:32:25.196 [2024-11-29 22:01:57.243822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.196 [2024-11-29 22:01:57.243864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.196 [2024-11-29 22:01:57.243881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.196 [2024-11-29 22:01:57.243890] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.196 [2024-11-29 22:01:57.243899] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.196 [2024-11-29 22:01:57.254062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.196 qpair failed and we were unable to recover it. 
00:32:25.196 [2024-11-29 22:01:57.263825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.196 [2024-11-29 22:01:57.263864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.196 [2024-11-29 22:01:57.263882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.196 [2024-11-29 22:01:57.263891] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.196 [2024-11-29 22:01:57.263900] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.196 [2024-11-29 22:01:57.274069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.196 qpair failed and we were unable to recover it. 00:32:25.196 [2024-11-29 22:01:57.283923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.196 [2024-11-29 22:01:57.283965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.196 [2024-11-29 22:01:57.283982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.196 [2024-11-29 22:01:57.283991] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.196 [2024-11-29 22:01:57.283999] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.197 [2024-11-29 22:01:57.294242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.197 qpair failed and we were unable to recover it. 00:32:25.197 [2024-11-29 22:01:57.303997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.197 [2024-11-29 22:01:57.304040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.197 [2024-11-29 22:01:57.304058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.197 [2024-11-29 22:01:57.304067] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.197 [2024-11-29 22:01:57.304076] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.197 [2024-11-29 22:01:57.314216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.197 qpair failed and we were unable to recover it. 
00:32:25.197 [2024-11-29 22:01:57.324124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.197 [2024-11-29 22:01:57.324163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.197 [2024-11-29 22:01:57.324181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.197 [2024-11-29 22:01:57.324190] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.197 [2024-11-29 22:01:57.324199] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.197 [2024-11-29 22:01:57.334175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.197 qpair failed and we were unable to recover it. 00:32:25.197 [2024-11-29 22:01:57.343984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.197 [2024-11-29 22:01:57.344028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.197 [2024-11-29 22:01:57.344046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.197 [2024-11-29 22:01:57.344055] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.197 [2024-11-29 22:01:57.344063] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.197 [2024-11-29 22:01:57.354370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.197 qpair failed and we were unable to recover it. 00:32:25.197 [2024-11-29 22:01:57.364114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.197 [2024-11-29 22:01:57.364156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.197 [2024-11-29 22:01:57.364177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.197 [2024-11-29 22:01:57.364186] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.197 [2024-11-29 22:01:57.364197] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.197 [2024-11-29 22:01:57.374359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.197 qpair failed and we were unable to recover it. 
00:32:25.197 [2024-11-29 22:01:57.384317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.197 [2024-11-29 22:01:57.384366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.197 [2024-11-29 22:01:57.384383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.197 [2024-11-29 22:01:57.384392] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.197 [2024-11-29 22:01:57.384401] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.197 [2024-11-29 22:01:57.394583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.197 qpair failed and we were unable to recover it. 00:32:25.197 [2024-11-29 22:01:57.404335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.197 [2024-11-29 22:01:57.404378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.197 [2024-11-29 22:01:57.404395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.197 [2024-11-29 22:01:57.404405] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.197 [2024-11-29 22:01:57.404414] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.197 [2024-11-29 22:01:57.414178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.197 qpair failed and we were unable to recover it. 00:32:25.197 [2024-11-29 22:01:57.424231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.197 [2024-11-29 22:01:57.424272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.197 [2024-11-29 22:01:57.424290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.197 [2024-11-29 22:01:57.424300] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.197 [2024-11-29 22:01:57.424308] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.197 [2024-11-29 22:01:57.434520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.197 qpair failed and we were unable to recover it. 
00:32:25.455 [2024-11-29 22:01:57.444255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.455 [2024-11-29 22:01:57.444298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.455 [2024-11-29 22:01:57.444316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.444326] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.444338] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.454550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.456 [2024-11-29 22:01:57.464346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.464391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.464408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.464417] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.464426] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.474843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.456 [2024-11-29 22:01:57.484400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.484443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.484460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.484469] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.484477] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.494745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 
00:32:25.456 [2024-11-29 22:01:57.504486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.504530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.504547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.504556] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.504564] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.514658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.456 [2024-11-29 22:01:57.524474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.524516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.524534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.524542] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.524551] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.534849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.456 [2024-11-29 22:01:57.544651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.544702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.544719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.544728] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.544737] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.554804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 
00:32:25.456 [2024-11-29 22:01:57.564713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.564757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.564774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.564783] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.564792] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.575185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.456 [2024-11-29 22:01:57.584711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.584748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.584765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.584775] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.584783] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.595165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.456 [2024-11-29 22:01:57.604833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.604874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.604893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.604903] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.604911] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.615192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 
00:32:25.456 [2024-11-29 22:01:57.624864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.624913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.624931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.624943] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.624952] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.635221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.456 [2024-11-29 22:01:57.644912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.644956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.644974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.644983] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.644992] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.655357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.456 [2024-11-29 22:01:57.664973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.665015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.665032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.665041] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.665050] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.675263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 
00:32:25.456 [2024-11-29 22:01:57.685045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.456 [2024-11-29 22:01:57.685086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.456 [2024-11-29 22:01:57.685103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.456 [2024-11-29 22:01:57.685112] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.456 [2024-11-29 22:01:57.685120] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.456 [2024-11-29 22:01:57.695449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.456 qpair failed and we were unable to recover it. 00:32:25.715 [2024-11-29 22:01:57.705173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.715 [2024-11-29 22:01:57.705217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.715 [2024-11-29 22:01:57.705235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.715 [2024-11-29 22:01:57.705244] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.715 [2024-11-29 22:01:57.705252] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.715 [2024-11-29 22:01:57.715403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.715 qpair failed and we were unable to recover it. 00:32:25.715 [2024-11-29 22:01:57.725243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.715 [2024-11-29 22:01:57.725283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.715 [2024-11-29 22:01:57.725300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.715 [2024-11-29 22:01:57.725309] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.715 [2024-11-29 22:01:57.725318] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.715 [2024-11-29 22:01:57.735375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.715 qpair failed and we were unable to recover it. 
00:32:25.715 [2024-11-29 22:01:57.745209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.715 [2024-11-29 22:01:57.745247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.715 [2024-11-29 22:01:57.745264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.715 [2024-11-29 22:01:57.745273] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.715 [2024-11-29 22:01:57.745282] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.715 [2024-11-29 22:01:57.755611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.715 qpair failed and we were unable to recover it. 00:32:25.715 [2024-11-29 22:01:57.765339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.715 [2024-11-29 22:01:57.765379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.715 [2024-11-29 22:01:57.765395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.715 [2024-11-29 22:01:57.765404] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.715 [2024-11-29 22:01:57.765413] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.715 [2024-11-29 22:01:57.775621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.715 qpair failed and we were unable to recover it. 00:32:25.715 [2024-11-29 22:01:57.785308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.715 [2024-11-29 22:01:57.785354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.715 [2024-11-29 22:01:57.785371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.715 [2024-11-29 22:01:57.785380] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.715 [2024-11-29 22:01:57.785388] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.715 [2024-11-29 22:01:57.795704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 
00:32:25.716 [2024-11-29 22:01:57.805425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.716 [2024-11-29 22:01:57.805466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.716 [2024-11-29 22:01:57.805486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.716 [2024-11-29 22:01:57.805496] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.716 [2024-11-29 22:01:57.805504] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.716 [2024-11-29 22:01:57.815785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 00:32:25.716 [2024-11-29 22:01:57.825382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.716 [2024-11-29 22:01:57.825423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.716 [2024-11-29 22:01:57.825441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.716 [2024-11-29 22:01:57.825450] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.716 [2024-11-29 22:01:57.825458] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.716 [2024-11-29 22:01:57.835811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 00:32:25.716 [2024-11-29 22:01:57.845534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.716 [2024-11-29 22:01:57.845577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.716 [2024-11-29 22:01:57.845594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.716 [2024-11-29 22:01:57.845603] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.716 [2024-11-29 22:01:57.845612] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.716 [2024-11-29 22:01:57.855950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 
00:32:25.716 [2024-11-29 22:01:57.865535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.716 [2024-11-29 22:01:57.865577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.716 [2024-11-29 22:01:57.865593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.716 [2024-11-29 22:01:57.865602] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.716 [2024-11-29 22:01:57.865611] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.716 [2024-11-29 22:01:57.875968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 00:32:25.716 [2024-11-29 22:01:57.885570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.716 [2024-11-29 22:01:57.885608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.716 [2024-11-29 22:01:57.885625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.716 [2024-11-29 22:01:57.885634] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.716 [2024-11-29 22:01:57.885645] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.716 [2024-11-29 22:01:57.895967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 00:32:25.716 [2024-11-29 22:01:57.905712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.716 [2024-11-29 22:01:57.905752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.716 [2024-11-29 22:01:57.905769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.716 [2024-11-29 22:01:57.905778] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.716 [2024-11-29 22:01:57.905786] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.716 [2024-11-29 22:01:57.916039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 
00:32:25.716 [2024-11-29 22:01:57.925809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.716 [2024-11-29 22:01:57.925850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.716 [2024-11-29 22:01:57.925867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.716 [2024-11-29 22:01:57.925876] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.716 [2024-11-29 22:01:57.925885] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.716 [2024-11-29 22:01:57.936125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 00:32:25.716 [2024-11-29 22:01:57.945848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.716 [2024-11-29 22:01:57.945890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.716 [2024-11-29 22:01:57.945908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.716 [2024-11-29 22:01:57.945917] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.716 [2024-11-29 22:01:57.945925] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.716 [2024-11-29 22:01:57.956202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.716 qpair failed and we were unable to recover it. 00:32:25.974 [2024-11-29 22:01:57.965850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.974 [2024-11-29 22:01:57.965887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.974 [2024-11-29 22:01:57.965903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.974 [2024-11-29 22:01:57.965913] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.974 [2024-11-29 22:01:57.965921] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.974 [2024-11-29 22:01:57.976095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.974 qpair failed and we were unable to recover it. 
00:32:25.975 [2024-11-29 22:01:57.985967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.975 [2024-11-29 22:01:57.986015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.975 [2024-11-29 22:01:57.986043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.975 [2024-11-29 22:01:57.986057] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.975 [2024-11-29 22:01:57.986069] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:32:25.975 [2024-11-29 22:01:57.996263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:25.975 qpair failed and we were unable to recover it. 00:32:25.975 [2024-11-29 22:01:58.005975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.975 [2024-11-29 22:01:58.006016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.975 [2024-11-29 22:01:58.006033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.975 [2024-11-29 22:01:58.006042] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.975 [2024-11-29 22:01:58.006051] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:32:25.975 [2024-11-29 22:01:58.016141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:25.975 qpair failed and we were unable to recover it. 00:32:25.975 [2024-11-29 22:01:58.016273] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:25.975 A controller has encountered a failure and is being reset. 00:32:25.975 [2024-11-29 22:01:58.026100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:25.975 [2024-11-29 22:01:58.026143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:25.975 [2024-11-29 22:01:58.026169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:25.975 [2024-11-29 22:01:58.026183] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:25.975 [2024-11-29 22:01:58.026195] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:25.975 [2024-11-29 22:01:58.036424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:25.975 qpair failed and we were unable to recover it. 
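The Keep Alive failure just above is the pivot point: once the host cannot even submit a Keep Alive, the controller is declared failed and a reset begins, and the retry records start alternating between the I/O qpair (rqpair=0x20000032f740, qpair id 3) and a second qpair (rqpair=0x2000003d4c00, qpair id 1). A quick way to separate the interleaved qpairs in a saved copy of this console output (a sketch; console.log is a hypothetical capture file):
grep -o 'rqpair=0x[0-9a-f]*' console.log | sort | uniq -c   # retries per rqpair
grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c   # retries per qpair id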
00:32:25.975 [2024-11-29 22:01:58.046113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.975 [2024-11-29 22:01:58.046158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.975 [2024-11-29 22:01:58.046177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.975 [2024-11-29 22:01:58.046186] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.975 [2024-11-29 22:01:58.046195] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740
00:32:25.975 [2024-11-29 22:01:58.056365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:25.975 qpair failed and we were unable to recover it.
00:32:25.975 [2024-11-29 22:01:58.056530] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:32:25.975 [2024-11-29 22:01:58.058546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:32:25.975 Controller properly reset.
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Read completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 Write completed with error (sct=0, sc=8)
00:32:26.909 starting I/O failed
00:32:26.909 [2024-11-29 22:01:59.081163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:32:26.909 Initializing NVMe Controllers
00:32:26.909 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:26.909 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:26.909 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:32:26.909 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:32:26.909 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:32:26.909 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:32:26.909 Initialization complete. Launching workers.
00:32:26.909 Starting thread on core 1
00:32:26.909 Starting thread on core 2
00:32:26.909 Starting thread on core 3
00:32:26.909 Starting thread on core 0
00:32:26.909 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:32:26.909 
00:32:26.909 real 0m12.492s
00:32:26.909 user 0m27.256s
00:32:26.909 sys 0m3.078s
00:32:26.909 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:26.909 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:26.909 ************************************
00:32:26.909 END TEST nvmf_target_disconnect_tc2
00:32:26.909 ************************************
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']'
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:32:27.168 ************************************
00:32:27.168 START TEST nvmf_target_disconnect_tc3
00:32:27.168 ************************************
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3231334
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
00:32:27.168 22:01:59 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:32:29.143 22:02:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3229984
00:32:29.143 22:02:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Read completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 Write completed with error (sct=0, sc=8)
00:32:30.539 starting I/O failed
00:32:30.539 [2024-11-29 22:02:02.395828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:31.107 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3229984 Killed "${NVMF_APP[@]}" "$@"
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@505 -- # nvmfpid=3231892
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@504 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@506 -- # waitforlisten 3231892
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3231892 ']'
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:31.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:31.107 22:02:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:31.107 [2024-11-29 22:02:03.272390] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
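This is tc3's core move, condensed from the trace above into the equivalent shell steps (a sketch; the pids are the ones from this run, and waitforlisten is the harness helper seen in the trace):
kill -9 3229984                              # old nvmf_tgt serving 192.168.100.8 dies with I/O in flight
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &  # fresh target instance for the failover address
waitforlisten 3231892                        # block until the new target's RPC socket is up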
00:32:31.107 [2024-11-29 22:02:03.272444] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.365 [2024-11-29 22:02:03.363299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Write completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 Read completed with error (sct=0, sc=8) 00:32:31.365 starting I/O failed 00:32:31.365 [2024-11-29 22:02:03.400969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:31.365 [2024-11-29 22:02:03.400989] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:31.365 [2024-11-29 22:02:03.401024] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:31.365 [2024-11-29 22:02:03.401053] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:31.365 [2024-11-29 22:02:03.401062] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:31.365 [2024-11-29 22:02:03.401069] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:31.365 [2024-11-29 22:02:03.401187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:32:31.365 [2024-11-29 22:02:03.401301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:32:31.365 [2024-11-29 22:02:03.401409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:32:31.365 [2024-11-29 22:02:03.401410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:31.931 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:32.190 Malloc0
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:32.190 [2024-11-29 22:02:04.209704] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe9d8f0/0xea95d0) succeed.
00:32:32.190 [2024-11-29 22:02:04.220441] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe9eee0/0xeeac70) succeed.
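Note the deliberate core split: the reconnect initiator was started with -c 0xF (cores 0-3) while the new target uses -m 0xF0, which is why the reactors above come up on cores 4-7. The target-side bring-up, condensing the rpc_cmd calls traced here and just below into direct scripts/rpc.py invocations (a sketch; rpc_cmd is the harness wrapper around scripts/rpc.py, and all arguments are taken from the trace):
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420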
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:32.190 [2024-11-29 22:02:04.364436] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:32.190 22:02:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3231334
00:32:32.190 Read completed with error (sct=0, sc=8)
00:32:32.190 starting I/O failed
00:32:32.190 Write completed with error (sct=0, sc=8)
00:32:32.190 starting I/O failed
00:32:32.190 Write completed with error (sct=0, sc=8)
00:32:32.190 starting I/O failed
00:32:32.190 Write completed with error (sct=0, sc=8)
00:32:32.190 starting I/O failed
00:32:32.190 Read completed with error (sct=0, sc=8)
00:32:32.190 starting I/O failed
00:32:32.190 Write completed with error (sct=0, sc=8)
00:32:32.190 starting I/O failed
00:32:32.190 Write completed with error (sct=0, sc=8)
starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Read completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 Write completed with error (sct=0, sc=8) 00:32:32.190 starting I/O failed 00:32:32.190 [2024-11-29 22:02:04.405980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O 
failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Write completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 Read completed with error (sct=0, sc=8) 00:32:33.564 starting I/O failed 00:32:33.564 [2024-11-29 22:02:05.410948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:33.564 [2024-11-29 22:02:05.412578] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:33.564 [2024-11-29 22:02:05.412599] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:33.564 [2024-11-29 22:02:05.412608] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:32:34.500 [2024-11-29 22:02:06.416485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:34.500 qpair failed and we were unable to recover it. 00:32:34.500 [2024-11-29 22:02:06.417987] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:34.500 [2024-11-29 22:02:06.418005] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:34.500 [2024-11-29 22:02:06.418014] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:32:35.436 [2024-11-29 22:02:07.421913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:35.437 qpair failed and we were unable to recover it. 
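The cycle above repeats at roughly one-second intervals: each CONNECT attempt gets RDMA_CM_EVENT_REJECTED (status 8) instead of RDMA_CM_EVENT_ESTABLISHED, apparently because the host is still dialing the original target address, where nothing is listening any longer; -74 is plausibly -EBADMSG propagated from the CM-event check. While the loop runs, the failover listener can be confirmed from another shell (a sketch; assumes the harness's working directory):
scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
# expect trtype RDMA, traddr 192.168.100.9, trsvcid 4420 once tc3's listener is up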
00:32:35.437 [2024-11-29 22:02:07.423632] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:35.437 [2024-11-29 22:02:07.423651] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:35.437 [2024-11-29 22:02:07.423659] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:32:36.372 [2024-11-29 22:02:08.427543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:36.372 qpair failed and we were unable to recover it. 00:32:36.372 [2024-11-29 22:02:08.429051] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:36.372 [2024-11-29 22:02:08.429069] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:36.372 [2024-11-29 22:02:08.429077] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:32:37.308 [2024-11-29 22:02:09.432957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:37.308 qpair failed and we were unable to recover it. 00:32:37.308 [2024-11-29 22:02:09.434495] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:37.308 [2024-11-29 22:02:09.434514] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:37.308 [2024-11-29 22:02:09.434522] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:32:38.244 [2024-11-29 22:02:10.438359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:38.244 qpair failed and we were unable to recover it. 00:32:38.244 [2024-11-29 22:02:10.439992] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:38.244 [2024-11-29 22:02:10.440011] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:38.244 [2024-11-29 22:02:10.440020] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:32:39.621 [2024-11-29 22:02:11.443877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:39.621 qpair failed and we were unable to recover it. 00:32:39.621 [2024-11-29 22:02:11.445512] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:39.621 [2024-11-29 22:02:11.445542] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:39.621 [2024-11-29 22:02:11.445554] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:40.555 [2024-11-29 22:02:12.449407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.555 qpair failed and we were unable to recover it. 
00:32:40.555 [2024-11-29 22:02:12.450866] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:40.555 [2024-11-29 22:02:12.450885] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:40.555 [2024-11-29 22:02:12.450893] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000032f740 00:32:41.489 [2024-11-29 22:02:13.454694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.489 qpair failed and we were unable to recover it. 00:32:41.489 [2024-11-29 22:02:13.456325] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:41.489 [2024-11-29 22:02:13.456346] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:41.489 [2024-11-29 22:02:13.456355] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:32:42.423 [2024-11-29 22:02:14.460307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.423 qpair failed and we were unable to recover it. 00:32:42.423 [2024-11-29 22:02:14.462003] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:42.423 [2024-11-29 22:02:14.462020] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:42.423 [2024-11-29 22:02:14.462028] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:32:43.360 [2024-11-29 22:02:15.465961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:43.360 qpair failed and we were unable to recover it. 00:32:43.360 [2024-11-29 22:02:15.466044] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:43.360 A controller has encountered a failure and is being reset. 00:32:43.360 Resorting to new failover address 192.168.100.9 00:32:43.360 [2024-11-29 22:02:15.467829] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:43.360 [2024-11-29 22:02:15.467856] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:43.360 [2024-11-29 22:02:15.467868] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:32:44.295 [2024-11-29 22:02:16.471787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:44.295 qpair failed and we were unable to recover it. 
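"Resorting to new failover address 192.168.100.9" above is the behavior this whole test exists to exercise: the alternate address was baked into the initiator's -r string at launch, so once the controller is declared failed the reconnect example retries against the new listener instead of the dead one. The invocation, copied from the tc3 trace earlier:
build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'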
00:32:44.295 [2024-11-29 22:02:16.473315] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:44.295 [2024-11-29 22:02:16.473333] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:44.295 [2024-11-29 22:02:16.473341] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:32:45.671 [2024-11-29 22:02:17.477295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:45.671 qpair failed and we were unable to recover it. 00:32:45.671 [2024-11-29 22:02:17.477414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.671 [2024-11-29 22:02:17.477517] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:45.671 [2024-11-29 22:02:17.509381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:45.671 Controller properly reset. 00:32:45.671 Initializing NVMe Controllers 00:32:45.671 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:45.671 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:45.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:45.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:45.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:45.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:45.671 Initialization complete. Launching workers. 
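[Editor's note] The block above shows the disconnect test's intended behavior: the host keeps getting RDMA_CM_EVENT_REJECTED while the target is down, then falls back to the failover address 192.168.100.9 and resets the controller cleanly. As a hedged illustration only (this loop is not part of the test run; addresses, port, and retry counts are assumptions), the same host-side fallback can be sketched with plain nvme-cli:

  # Sketch: retry an NVMe-oF RDMA connect across failover addresses,
  # mirroring how the host above gives up on 192.168.100.8 and
  # "resorts to" 192.168.100.9 after repeated CM rejections.
  nqn=nqn.2016-06.io.spdk:cnode1
  for addr in 192.168.100.8 192.168.100.9; do
      for try in {1..5}; do
          # nvme-cli returns non-zero while the listener is still down
          if nvme connect -t rdma -a "$addr" -s 4420 -n "$nqn"; then
              echo "connected to $nqn at $addr"
              exit 0
          fi
          sleep 1
      done
  done
  echo "all failover addresses exhausted" >&2
  exit 1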
00:32:45.671 Starting thread on core 1 00:32:45.671 Starting thread on core 2 00:32:45.671 Starting thread on core 3 00:32:45.671 Starting thread on core 0 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:32:45.671 00:32:45.671 real 0m18.347s 00:32:45.671 user 0m59.233s 00:32:45.671 sys 0m5.657s 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:45.671 ************************************ 00:32:45.671 END TEST nvmf_target_disconnect_tc3 00:32:45.671 ************************************ 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:45.671 rmmod nvme_rdma 00:32:45.671 rmmod nvme_fabrics 00:32:45.671 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 3231892 ']' 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 3231892 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3231892 ']' 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3231892 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3231892 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3231892' 00:32:45.672 killing process with pid 3231892 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 3231892 00:32:45.672 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3231892 00:32:45.931 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:45.931 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:32:45.931 00:32:45.931 real 0m39.613s 00:32:45.931 user 2m27.263s 00:32:45.931 sys 0m14.695s 00:32:45.931 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.931 22:02:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:45.931 ************************************ 00:32:45.931 END TEST nvmf_target_disconnect 00:32:45.931 ************************************ 00:32:45.931 22:02:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:45.931 00:32:45.931 real 7m15.863s 00:32:45.931 user 20m43.292s 00:32:45.931 sys 1m42.073s 00:32:45.931 22:02:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.931 22:02:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.931 ************************************ 00:32:45.931 END TEST nvmf_host 00:32:45.931 ************************************ 00:32:45.931 22:02:18 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:32:45.931 00:32:45.931 real 26m36.061s 00:32:45.931 user 78m31.728s 00:32:45.931 sys 6m25.787s 00:32:45.931 22:02:18 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.931 22:02:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:45.931 ************************************ 00:32:45.931 END TEST nvmf_rdma 00:32:45.931 ************************************ 00:32:45.932 22:02:18 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:32:45.932 22:02:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:45.932 22:02:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:45.932 22:02:18 -- common/autotest_common.sh@10 -- # set +x 00:32:45.932 ************************************ 00:32:45.932 START TEST spdkcli_nvmf_rdma 00:32:45.932 ************************************ 00:32:45.932 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:32:46.192 * Looking for test storage... 
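[Editor's note] The killprocess sequence traced just above (kill -0, uname, ps --no-headers -o comm=, then kill and wait) amounts to a guarded process teardown. The following is a rough reconstruction from the trace, not a copy of autotest_common.sh, so details may differ:

  # Sketch of a killprocess-style helper, reconstructed from the trace.
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone
      if [ "$(uname)" = Linux ]; then
          # refuse to signal if the pid now belongs to something unexpected
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                      # reap it if it is our child
  }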
00:32:46.192 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # lcov --version 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:46.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.192 --rc genhtml_branch_coverage=1 00:32:46.192 --rc genhtml_function_coverage=1 00:32:46.192 --rc genhtml_legend=1 00:32:46.192 --rc geninfo_all_blocks=1 00:32:46.192 --rc geninfo_unexecuted_blocks=1 00:32:46.192 00:32:46.192 ' 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:46.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:32:46.192 --rc genhtml_branch_coverage=1 00:32:46.192 --rc genhtml_function_coverage=1 00:32:46.192 --rc genhtml_legend=1 00:32:46.192 --rc geninfo_all_blocks=1 00:32:46.192 --rc geninfo_unexecuted_blocks=1 00:32:46.192 00:32:46.192 ' 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:46.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.192 --rc genhtml_branch_coverage=1 00:32:46.192 --rc genhtml_function_coverage=1 00:32:46.192 --rc genhtml_legend=1 00:32:46.192 --rc geninfo_all_blocks=1 00:32:46.192 --rc geninfo_unexecuted_blocks=1 00:32:46.192 00:32:46.192 ' 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:46.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.192 --rc genhtml_branch_coverage=1 00:32:46.192 --rc genhtml_function_coverage=1 00:32:46.192 --rc genhtml_legend=1 00:32:46.192 --rc geninfo_all_blocks=1 00:32:46.192 --rc geninfo_unexecuted_blocks=1 00:32:46.192 00:32:46.192 ' 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.192 22:02:18 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:46.193 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3234518 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3234518 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 3234518 ']' 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:46.193 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:46.193 [2024-11-29 22:02:18.428160] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:46.193 [2024-11-29 22:02:18.428218] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234518 ] 00:32:46.453 [2024-11-29 22:02:18.499000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:46.453 [2024-11-29 22:02:18.538315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.453 [2024-11-29 22:02:18.538317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@465 -- # '[' -z rdma ']' 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
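[Editor's note] The run_nvmf_tgt/waitforlisten step above starts nvmf_tgt with -m 0x3 -p 0 and blocks until the app listens on /var/tmp/spdk.sock. A minimal sketch of that readiness check, assuming the stock rpc.py location and the standard rpc_get_methods RPC (neither is shown verbatim in this run):

  # Sketch: poll until the SPDK app's RPC socket exists and answers.
  rpc_sock=/var/tmp/spdk.sock
  for i in {1..100}; do
      if [ -S "$rpc_sock" ] && ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
          echo "nvmf_tgt is up"
          break
      fi
      sleep 0.1
  done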
00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:32:46.453 22:02:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@345 -- # [[ rdma == rdma ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${x722[@]}") 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # pci_devs+=("${mlx[@]}") 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ mlx5 == mlx5 ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # pci_devs=("${mlx[@]}") 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:54.577 22:02:25 
spdkcli_nvmf_rdma -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:54.577 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@365 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:54.577 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # [[ mlx5_core == unknown ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@370 -- # [[ mlx5_core == unbound ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@374 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@375 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ rdma == rdma ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@386 -- # NVME_CONNECT='nvme connect -i 15' 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@396 -- # [[ mlx5 == e810 ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:54.577 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # [[ rdma == tcp ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:54.577 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # is_hw=yes 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # [[ rdma == tcp ]] 
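[Editor's note] The trace above walks a cached PCI-id table to find the two Mellanox ConnectX functions (vendor 0x15b3, device 0x1015) and their netdevs. Outside the harness, the same discovery can be approximated directly; the commands below are an illustration, not taken from this run:

  # Sketch: list Mellanox PCI functions and their net device names.
  lspci -D -d 15b3:                        # all vendor-0x15b3 functions, with domains
  for pci in /sys/bus/pci/devices/0000:d9:00.{0,1}; do
      ls "$pci/net" 2>/dev/null            # prints mlx_0_0 / mlx_0_1 on this node
  done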
00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@443 -- # [[ rdma == rdma ]] 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # rdma_device_init 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@525 -- # load_ib_rdma_modules 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:54.577 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@526 -- # allocate_nic_ips 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:54.578 22:02:25 
spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:54.578 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:54.578 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:54.578 altname enp217s0f0np0 00:32:54.578 altname ens818f0np0 00:32:54.578 inet 192.168.100.8/24 scope global mlx_0_0 00:32:54.578 valid_lft forever preferred_lft forever 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:54.578 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:54.578 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:54.578 altname enp217s0f1np1 00:32:54.578 altname ens818f1np1 00:32:54.578 inet 192.168.100.9/24 scope global mlx_0_1 00:32:54.578 valid_lft forever preferred_lft forever 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # return 0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@479 -- # [[ rdma == \r\d\m\a ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@480 -- # get_available_rdma_ips 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@480 -- # RDMA_IP_LIST='192.168.100.8 00:32:54.578 192.168.100.9' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # echo '192.168.100.8 00:32:54.578 192.168.100.9' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # head -n 1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # echo '192.168.100.8 00:32:54.578 192.168.100.9' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # tail -n +2 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # head -n 1 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # '[' -z 192.168.100.8 ']' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' rdma == tcp ']' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' rdma == rdma ']' 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- nvmf/common.sh@498 -- # modprobe nvme-rdma 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:54.578 22:02:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:54.578 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:54.578 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:54.578 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:54.578 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:54.578 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:54.578 '\''nvmf/transport create rdma 
max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:54.578 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:32:54.578 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:32:54.578 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:32:54.578 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:32:54.579 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:54.579 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:54.579 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:54.579 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:54.579 ' 00:32:55.958 [2024-11-29 22:02:28.183403] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1543b20/0x142e200) succeed. 00:32:55.958 [2024-11-29 22:02:28.193067] rdma.c:2587:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x153ec40/0x146f8a0) succeed. 
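[Editor's note] The allocate_nic_ips steps traced earlier (interface=mlx_0_0, ip -o -4 addr show, awk, cut) reduce to a small helper; this sketch uses exactly the pipeline and interface names seen in this run:

  # Sketch: extract the first IPv4 address of an interface, as the
  # get_ip_address calls above do.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # -> 192.168.100.8
  get_ip_address mlx_0_1    # -> 192.168.100.9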
00:32:57.334 [2024-11-29 22:02:29.471221] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:32:59.869 [2024-11-29 22:02:31.718274] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:33:01.774 [2024-11-29 22:02:33.648562] rdma.c:3042:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:33:03.151 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:03.151 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:03.151 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:03.151 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:03.151 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:03.151 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:03.151 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:03.151 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:03.151 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:03.151 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:03.151 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:03.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:03.151 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:03.151 22:02:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:03.151 22:02:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:03.151 22:02:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:03.151 22:02:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:03.151 22:02:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:03.151 22:02:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:03.151 22:02:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:33:03.151 22:02:35 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:03.720 22:02:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:03.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:03.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:03.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:03.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:33:03.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:33:03.720 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:03.720 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:03.720 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:03.720 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:03.720 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:03.720 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:03.720 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:03.720 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:03.720 ' 00:33:08.996 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:08.996 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:08.996 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:08.996 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:08.996 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:33:08.996 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:33:08.996 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:08.996 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:08.996 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:08.996 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:08.996 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:08.996 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:08.996 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:08.996 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3234518 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 3234518 ']' 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 3234518 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3234518 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3234518' 00:33:08.996 killing process with pid 3234518 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 3234518 00:33:08.996 22:02:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 3234518 00:33:08.996 22:02:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:33:08.996 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:08.996 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:33:08.996 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
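[Editor's note] The create and delete batches executed above are driven through spdkcli_job.py as (command, match, expected) triples. Assuming spdkcli.py accepts one-shot commands the way the "ll /nvmf" call in check_match does, an equivalent (if slower) manual session might look like the following sketch; the path is the workspace's, the commands are a subset of those shown above:

  # Sketch: issue a few of the same spdkcli commands one at a time.
  spdkcli=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py
  $spdkcli "/bdevs/malloc create 32 512 Malloc1"
  $spdkcli "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
  $spdkcli "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"
  $spdkcli "/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1"
  $spdkcli "/bdevs/malloc delete Malloc1"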
00:33:08.996 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:08.996 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:33:08.996 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:08.996 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:08.996 rmmod nvme_rdma 00:33:09.256 rmmod nvme_fabrics 00:33:09.256 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:09.256 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:33:09.256 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:33:09.256 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:33:09.256 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:09.256 22:02:41 spdkcli_nvmf_rdma -- nvmf/common.sh@519 -- # [[ rdma == \t\c\p ]] 00:33:09.256 00:33:09.256 real 0m23.139s 00:33:09.256 user 0m49.056s 00:33:09.256 sys 0m6.251s 00:33:09.256 22:02:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:09.256 22:02:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:09.256 ************************************ 00:33:09.256 END TEST spdkcli_nvmf_rdma 00:33:09.256 ************************************ 00:33:09.256 22:02:41 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:09.256 22:02:41 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:09.256 22:02:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:09.256 22:02:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:09.256 22:02:41 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:09.256 22:02:41 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:09.256 22:02:41 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:09.256 22:02:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:09.256 22:02:41 -- common/autotest_common.sh@10 -- # set +x 00:33:09.256 22:02:41 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:09.256 22:02:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:09.256 22:02:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:09.256 22:02:41 -- common/autotest_common.sh@10 -- # set +x 00:33:15.935 INFO: APP EXITING 00:33:15.935 INFO: killing all VMs 00:33:15.935 INFO: killing vhost app 00:33:15.935 INFO: EXIT DONE 00:33:17.839 Waiting for block devices as requested 00:33:17.839 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:17.839 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:18.098 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:18.098 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:18.098 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:18.098 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:18.358 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:18.358 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:33:18.358 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:18.617 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:18.617 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:18.617 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:18.877 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:18.877 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:18.877 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:19.137 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:19.137 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:22.431 Cleaning 00:33:22.431 Removing: /var/run/dpdk/spdk0/config 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:22.431 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:22.431 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:22.431 Removing: /var/run/dpdk/spdk1/config 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:22.431 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:22.431 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:22.431 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:22.431 Removing: /var/run/dpdk/spdk2/config 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:22.431 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:22.431 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:22.431 Removing: /var/run/dpdk/spdk3/config 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:22.431 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:22.431 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:22.431 Removing: /var/run/dpdk/spdk4/config 00:33:22.431 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:22.431 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:22.431 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:22.431 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:22.431 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:22.431 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:22.431 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:22.431 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:22.431 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:22.431 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:22.431 Removing: /dev/shm/bdevperf_trace.pid2882834 00:33:22.431 Removing: /dev/shm/bdevperf_trace.pid3128394 00:33:22.431 Removing: /dev/shm/bdev_svc_trace.1 00:33:22.431 Removing: /dev/shm/nvmf_trace.0 00:33:22.431 Removing: /dev/shm/spdk_tgt_trace.pid2838488 00:33:22.431 Removing: /var/run/dpdk/spdk0 00:33:22.431 Removing: /var/run/dpdk/spdk1 00:33:22.431 Removing: /var/run/dpdk/spdk2 00:33:22.431 Removing: /var/run/dpdk/spdk3 00:33:22.431 Removing: /var/run/dpdk/spdk4 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2835870 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2837149 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2838488 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2839084 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2840083 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2840193 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2841308 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2841314 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2841702 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2846799 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2848270 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2848594 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2848917 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2849257 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2849514 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2849670 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2849911 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2850227 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2851112 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2854270 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2854560 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2854857 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2854860 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2855427 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2855435 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2856004 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2856021 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2856429 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2856570 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2856711 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2856875 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2857273 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2857549 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2857876 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2861776 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2866162 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2877069 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2877905 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2882834 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2883075 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2887185 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2893040 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2895790 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2905998 00:33:22.431 Removing: /var/run/dpdk/spdk_pid2931213 00:33:22.431 Removing: 
/var/run/dpdk/spdk_pid2934970 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3029576 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3034837 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3040358 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3048986 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3079584 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3085160 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3126479 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3127418 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3128394 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3129352 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3134075 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3141175 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3141985 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3142934 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3143837 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3144246 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3148611 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3148613 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3153153 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3153685 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3154224 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3154981 00:33:22.431 Removing: /var/run/dpdk/spdk_pid3155019 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3157423 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3159280 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3161252 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3163537 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3165388 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3167242 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3173358 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3173894 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3176215 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3177252 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3184255 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3186921 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3192318 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3202973 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3202975 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3222749 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3223017 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3228867 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3229386 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3231334 00:33:22.692 Removing: /var/run/dpdk/spdk_pid3234518 00:33:22.692 Clean 00:33:22.692 22:02:54 -- common/autotest_common.sh@1451 -- # return 0 00:33:22.692 22:02:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:22.692 22:02:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:22.692 22:02:54 -- common/autotest_common.sh@10 -- # set +x 00:33:22.692 22:02:54 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:22.692 22:02:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:22.692 22:02:54 -- common/autotest_common.sh@10 -- # set +x 00:33:22.692 22:02:54 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:33:22.692 22:02:54 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:33:22.692 22:02:54 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:33:22.692 22:02:54 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:22.692 22:02:54 -- spdk/autotest.sh@394 -- # hostname 00:33:22.692 22:02:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:33:22.966 geninfo: WARNING: invalid characters removed from testname! 00:33:44.911 22:03:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:45.480 22:03:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:47.387 22:03:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:48.767 22:03:20 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:50.671 22:03:22 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:52.050 22:03:24 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:33:53.975 22:03:25 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:53.975 22:03:25 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:33:53.975 22:03:25 -- common/autotest_common.sh@1681 -- $ lcov --version 00:33:53.975 22:03:25 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:33:53.975 22:03:26 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:33:53.975 22:03:26 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:33:53.975 22:03:26 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:33:53.975 22:03:26 
-- scripts/common.sh@334 -- $ local ver2 ver2_l 00:33:53.975 22:03:26 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:53.975 22:03:26 -- scripts/common.sh@336 -- $ read -ra ver1 00:33:53.975 22:03:26 -- scripts/common.sh@337 -- $ IFS=.-: 00:33:53.975 22:03:26 -- scripts/common.sh@337 -- $ read -ra ver2 00:33:53.975 22:03:26 -- scripts/common.sh@338 -- $ local 'op=<' 00:33:53.975 22:03:26 -- scripts/common.sh@340 -- $ ver1_l=2 00:33:53.975 22:03:26 -- scripts/common.sh@341 -- $ ver2_l=1 00:33:53.975 22:03:26 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:33:53.975 22:03:26 -- scripts/common.sh@344 -- $ case "$op" in 00:33:53.975 22:03:26 -- scripts/common.sh@345 -- $ : 1 00:33:53.975 22:03:26 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:33:53.975 22:03:26 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:53.975 22:03:26 -- scripts/common.sh@365 -- $ decimal 1 00:33:53.975 22:03:26 -- scripts/common.sh@353 -- $ local d=1 00:33:53.975 22:03:26 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:53.975 22:03:26 -- scripts/common.sh@355 -- $ echo 1 00:33:53.975 22:03:26 -- scripts/common.sh@365 -- $ ver1[v]=1 00:33:53.975 22:03:26 -- scripts/common.sh@366 -- $ decimal 2 00:33:53.975 22:03:26 -- scripts/common.sh@353 -- $ local d=2 00:33:53.975 22:03:26 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:53.975 22:03:26 -- scripts/common.sh@355 -- $ echo 2 00:33:53.975 22:03:26 -- scripts/common.sh@366 -- $ ver2[v]=2 00:33:53.975 22:03:26 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:33:53.975 22:03:26 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:33:53.975 22:03:26 -- scripts/common.sh@368 -- $ return 0 00:33:53.975 22:03:26 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:53.975 22:03:26 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:33:53.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.975 --rc genhtml_branch_coverage=1 00:33:53.975 --rc genhtml_function_coverage=1 00:33:53.975 --rc genhtml_legend=1 00:33:53.975 --rc geninfo_all_blocks=1 00:33:53.975 --rc geninfo_unexecuted_blocks=1 00:33:53.975 00:33:53.975 ' 00:33:53.975 22:03:26 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:33:53.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.975 --rc genhtml_branch_coverage=1 00:33:53.975 --rc genhtml_function_coverage=1 00:33:53.975 --rc genhtml_legend=1 00:33:53.975 --rc geninfo_all_blocks=1 00:33:53.975 --rc geninfo_unexecuted_blocks=1 00:33:53.975 00:33:53.975 ' 00:33:53.975 22:03:26 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:33:53.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.975 --rc genhtml_branch_coverage=1 00:33:53.975 --rc genhtml_function_coverage=1 00:33:53.975 --rc genhtml_legend=1 00:33:53.975 --rc geninfo_all_blocks=1 00:33:53.975 --rc geninfo_unexecuted_blocks=1 00:33:53.975 00:33:53.975 ' 00:33:53.975 22:03:26 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:33:53.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.975 --rc genhtml_branch_coverage=1 00:33:53.975 --rc genhtml_function_coverage=1 00:33:53.975 --rc genhtml_legend=1 00:33:53.975 --rc geninfo_all_blocks=1 00:33:53.975 --rc geninfo_unexecuted_blocks=1 00:33:53.975 00:33:53.975 ' 00:33:53.975 22:03:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:53.975 22:03:26 -- scripts/common.sh@15 -- $ shopt 
-s extglob 00:33:53.975 22:03:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:53.975 22:03:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:53.975 22:03:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:53.975 22:03:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.975 22:03:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.975 22:03:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.975 22:03:26 -- paths/export.sh@5 -- $ export PATH 00:33:53.975 22:03:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.975 22:03:26 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:33:53.975 22:03:26 -- common/autobuild_common.sh@479 -- $ date +%s 00:33:53.975 22:03:26 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732914206.XXXXXX 00:33:53.975 22:03:26 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732914206.NTxpIm 00:33:53.975 22:03:26 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:33:53.975 22:03:26 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:33:53.975 22:03:26 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:33:53.975 22:03:26 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:33:53.975 22:03:26 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:53.976 22:03:26 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:53.976 22:03:26 -- common/autobuild_common.sh@495 -- $ get_config_params 00:33:53.976 22:03:26 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:33:53.976 22:03:26 -- common/autotest_common.sh@10 -- $ set +x 00:33:53.976 22:03:26 -- 
common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:33:53.976 22:03:26 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:33:53.976 22:03:26 -- pm/common@17 -- $ local monitor 00:33:53.976 22:03:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:53.976 22:03:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:53.976 22:03:26 -- pm/common@21 -- $ date +%s 00:33:53.976 22:03:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:53.976 22:03:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:53.976 22:03:26 -- pm/common@21 -- $ date +%s 00:33:53.976 22:03:26 -- pm/common@25 -- $ sleep 1 00:33:53.976 22:03:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1732914206 00:33:53.976 22:03:26 -- pm/common@21 -- $ date +%s 00:33:53.976 22:03:26 -- pm/common@21 -- $ date +%s 00:33:53.976 22:03:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1732914206 00:33:53.976 22:03:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1732914206 00:33:53.976 22:03:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1732914206 00:33:53.976 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1732914206_collect-cpu-load.pm.log 00:33:53.976 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1732914206_collect-vmstat.pm.log 00:33:53.976 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1732914206_collect-cpu-temp.pm.log 00:33:53.976 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1732914206_collect-bmc-pm.bmc.pm.log 00:33:54.915 22:03:27 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:33:54.915 22:03:27 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:33:54.915 22:03:27 -- spdk/autopackage.sh@14 -- $ timing_finish 00:33:54.915 22:03:27 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:54.915 22:03:27 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:54.915 22:03:27 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:33:54.915 22:03:27 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:54.915 22:03:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:54.915 22:03:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:54.915 22:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:54.915 22:03:27 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:54.915 22:03:27 -- pm/common@44 -- $ pid=3253918 00:33:54.915 22:03:27 -- pm/common@50 -- $ kill -TERM 3253918 00:33:54.915 22:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:54.915 22:03:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:54.915 22:03:27 -- pm/common@44 -- $ pid=3253920 00:33:54.915 22:03:27 -- pm/common@50 -- $ kill -TERM 3253920 00:33:54.915 22:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:54.915 22:03:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:54.915 22:03:27 -- pm/common@44 -- $ pid=3253923 00:33:54.915 22:03:27 -- pm/common@50 -- $ kill -TERM 3253923 00:33:54.915 22:03:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:54.915 22:03:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:54.915 22:03:27 -- pm/common@44 -- $ pid=3253949 00:33:54.915 22:03:27 -- pm/common@50 -- $ sudo -E kill -TERM 3253949 00:33:55.175 + [[ -n 2741161 ]] 00:33:55.175 + sudo kill 2741161 00:33:55.186 [Pipeline] } 00:33:55.202 [Pipeline] // stage 00:33:55.209 [Pipeline] } 00:33:55.225 [Pipeline] // timeout 00:33:55.232 [Pipeline] } 00:33:55.247 [Pipeline] // catchError 00:33:55.252 [Pipeline] } 00:33:55.268 [Pipeline] // wrap 00:33:55.274 [Pipeline] } 00:33:55.288 [Pipeline] // catchError 00:33:55.297 [Pipeline] stage 00:33:55.299 [Pipeline] { (Epilogue) 00:33:55.313 [Pipeline] catchError 00:33:55.314 [Pipeline] { 00:33:55.326 [Pipeline] echo 00:33:55.328 Cleanup processes 00:33:55.332 [Pipeline] sh 00:33:55.618 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:33:55.618 3254055 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:33:55.618 3254484 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:33:55.633 [Pipeline] sh 00:33:55.919 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:33:55.920 ++ grep -v 'sudo pgrep' 00:33:55.920 ++ awk '{print $1}' 00:33:55.920 + sudo kill -9 3254055 00:33:55.932 [Pipeline] sh 00:33:56.217 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:56.217 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:34:02.791 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:34:06.998 [Pipeline] sh 00:34:07.393 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:07.393 Artifacts sizes are good 00:34:07.418 [Pipeline] archiveArtifacts 00:34:07.428 Archiving artifacts 00:34:07.580 [Pipeline] sh 00:34:07.869 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:34:07.884 [Pipeline] cleanWs 00:34:07.894 [WS-CLEANUP] Deleting project workspace... 00:34:07.894 [WS-CLEANUP] Deferred wipeout is used... 00:34:07.901 [WS-CLEANUP] done 00:34:07.903 [Pipeline] } 00:34:07.921 [Pipeline] // catchError 00:34:07.932 [Pipeline] sh 00:34:08.221 + logger -p user.info -t JENKINS-CI 00:34:08.230 [Pipeline] } 00:34:08.244 [Pipeline] // stage 00:34:08.250 [Pipeline] } 00:34:08.264 [Pipeline] // node 00:34:08.269 [Pipeline] End of Pipeline 00:34:08.311 Finished: SUCCESS
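The lcov invocations traced toward the end of the run reduce to a capture, merge, filter pipeline: capture counters from the instrumented tree, fold them into the pre-test baseline, then strip out-of-tree and uninteresting paths. A condensed sketch with shortened file names; the real invocations also carry the long --rc option list and extra removal patterns shown in the log:

    # Capture coverage from the test run, tagged with the host name.
    lcov -q -c --no-external -d ./spdk -t spdk-wfp-21 -o cov_test.info
    # Fold the pre-test baseline and the test capture together.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Remove DPDK, system headers, and example/app sources from the total.
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info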